2025-04-01 23:56:54,007 [ 586694 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths) 2025-04-01 23:56:54,007 [ 586694 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:97, check_args_and_update_paths) 2025-04-01 23:56:54,007 [ 586694 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:108, check_args_and_update_paths) 2025-04-01 23:56:54,007 [ 586694 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:110, check_args_and_update_paths) clickhouse_integration_tests_volume Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_53ric3 --privileged --dns-search='.' --memory=30709035008 --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=6712d5cc610d -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS=" -rfEps --run-id=2 --color=no --durations=0 test_refreshable_mat_view_replicated/test.py::test_long_query_cancel test_refreshable_mat_view_replicated/test.py::test_query_fail test_refreshable_mat_view_replicated/test.py::test_query_retry 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-True]' test_s3_cluster/test.py::test_distributed_insert_select_with_replicated test_s3_cluster/test.py::test_distributed_s3_table_engine test_s3_cluster/test.py::test_hive_partitioning test_s3_cluster/test.py::test_parallel_distributed_insert_select_with_schema_inference test_s3_cluster/test.py::test_remote_hedged 
test_s3_cluster/test.py::test_remote_no_hedged test_s3_cluster/test.py::test_select_all test_s3_cluster/test.py::test_skip_unavailable_shards test_s3_cluster/test.py::test_union_all -vvv" altinityinfra/integration-tests-runner:cd6390247eca '.
Start tests
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: random-0.2, timeout-2.2.0, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0
timeout: 900.0s
timeout method: signal
timeout func_only: False
collecting ... collected 20 items

test_refreshable_mat_view_replicated/test.py::test_long_query_cancel SKIPPED [ 5%]
test_refreshable_mat_view_replicated/test.py::test_query_fail SKIPPED [ 10%]
test_refreshable_mat_view_replicated/test.py::test_query_retry SKIPPED [ 15%]
test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-False] SKIPPED [ 20%]
test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-True] SKIPPED [ 25%]
test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-False] SKIPPED [ 30%]
test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-True] SKIPPED [ 35%]
test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-False] SKIPPED [ 40%]
test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-True] SKIPPED [ 45%]
test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-False] SKIPPED [ 50%]
test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-True] SKIPPED [ 55%]
test_s3_cluster/test.py::test_distributed_insert_select_with_replicated FAILED [ 60%]
test_s3_cluster/test.py::test_distributed_s3_table_engine FAILED [ 65%]
test_s3_cluster/test.py::test_hive_partitioning FAILED [ 70%]
test_s3_cluster/test.py::test_parallel_distributed_insert_select_with_schema_inference FAILED [ 75%]
test_s3_cluster/test.py::test_remote_hedged FAILED [ 80%]
test_s3_cluster/test.py::test_remote_no_hedged FAILED [ 85%]
test_s3_cluster/test.py::test_select_all FAILED [ 90%]
test_s3_cluster/test.py::test_skip_unavailable_shards FAILED [ 95%]
test_s3_cluster/test.py::test_union_all FAILED [100%]

=================================== FAILURES ===================================
________________ test_distributed_insert_select_with_replicated ________________

started_cluster = 

    def test_distributed_insert_select_with_replicated(started_cluster):
        first_replica_first_shard = started_cluster.instances["s0_0_0"]
        second_replica_first_shard = started_cluster.instances["s0_0_1"]
        first_replica_first_shard.query(
            """DROP TABLE IF EXISTS insert_select_replicated_local ON CLUSTER 'first_shard' SYNC;"""
        )
        first_replica_first_shard.query(
            """
            CREATE TABLE insert_select_replicated_local ON CLUSTER 'first_shard' (a String, b UInt64)
            ENGINE=ReplicatedMergeTree('/clickhouse/tables/{shard}/insert_select_with_replicated', '{replica}')
            ORDER BY (a, b);
            """
        )
        for replica in [first_replica_first_shard, second_replica_first_shard]:
            replica.query(
                """
                SYSTEM STOP FETCHES;
                """
            )
            replica.query(
                """
                SYSTEM STOP MERGES;
                """
            )
>       first_replica_first_shard.query(
            """
            INSERT INTO insert_select_replicated_local SELECT * FROM s3Cluster(
                'first_shard',
                'http://minio1:9001/root/data/generated/*.csv', 'minio', 'minio123', 'CSV','a String, b UInt64'
            ) SETTINGS
parallel_distributed_insert_select=1;
            """
        )

test_s3_cluster/test.py:393:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:3678: in query
    return self.client.query(
helpers/client.py:39: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:79: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 32, stderr: [s0_0_0] 2025.04.01 23:58:04.178578 [ 676 ] {0278b966-b697-4f32-8138-3ac1c5d0fe59} : Logical error: 'Replica info is not initialized'.
E           [s0_0_0] 2025.04.01 23:58:04.208254 [ 676 ] {0278b966-b697-4f32-8138-3ac1c5d0fe59} : Stack trace (when copying this message, always include the lines below):
E
E           0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x0000000038031254
E           1. ./build_docker/./src/Common/Exception.cpp:105: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc0ed05
E           2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000b291545
E           3. DB::Exception::Exception<>(int, FormatStringHelperImpl<>) @ 0x000000000b2addce
E           4. ./build_docker/./src/QueryPipeline/RemoteQueryExecutor.cpp:727: DB::RemoteQueryExecutor::processReadTaskRequest() @ 0x0000000029aed2bb
E           5. ./build_docker/./src/QueryPipeline/RemoteQueryExecutor.cpp:623: DB::RemoteQueryExecutor::processPacket(DB::Packet) @ 0x0000000029ae7b24
E           6. ./build_docker/./src/QueryPipeline/RemoteQueryExecutor.cpp:562: DB::RemoteQueryExecutor::readAsync() @ 0x0000000029aeb3eb
E           7. ./build_docker/./src/Processors/Sources/RemoteSource.cpp:182: DB::RemoteSource::tryGenerate() @ 0x00000000315014a7
E           8. ./build_docker/./src/Processors/ISource.cpp:108: DB::ISource::work() @ 0x0000000030c354d7
E           9. ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:49: DB::ExecutionThreadContext::executeTask() @ 0x0000000030c711ce
E           10. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:290: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic*) @ 0x0000000030c55e51
E           11. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:256: DB::PipelineExecutor::executeImpl(unsigned long, bool) @ 0x0000000030c543fc
E           12. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:127: DB::PipelineExecutor::execute(unsigned long, bool) @ 0x0000000030c53ee2
E           13. ./build_docker/./src/Processors/Executors/CompletedPipelineExecutor.cpp:49: void std::__function::__policy_invoker::__call_impl::ThreadFromGlobalPoolImpl(DB::CompletedPipelineExecutor::execute()::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x0000000030c51f07
E           14. ./contrib/llvm-project/libcxx/include/__functional/function.h:848: ?
@ 0x000000001bdda21b E 15. ./contrib/llvm-project/libcxx/include/__functional/invoke.h:359: ? @ 0x000000001bde74f0 E 16. asan_thread_start(void*) @ 0x000000000b240e77 E 17. ? @ 0x00007f0e93988ac3 E 18. ? @ 0x00007f0e93a1a850 E E [s0_0_0] 2025.04.01 23:58:04.209730 [ 694 ] BaseDaemon: ######################################## E [s0_0_0] 2025.04.01 23:58:04.209877 [ 694 ] BaseDaemon: (version 24.12.2.20221.altinityantalya (altinity build), build id: E3F43B0C9BFE6311FC1B0D8F77858862995FA832, git hash: 82252d159dc02cab0f366aaa5691adc1545dd11d) (from thread 676) (query_id: 0278b966-b697-4f32-8138-3ac1c5d0fe59) (query: INSERT INTO insert_select_replicated_local SETTINGS parallel_distributed_insert_select = 1 SELECT * FROM s3Cluster('first_shard', 'http://minio1:9001/root/data/generated/*.csv', 'minio', '[HIDDEN]', 'CSV', 'a String, b UInt64') SETTINGS parallel_distributed_insert_select = 1) Received signal Aborted (6) E [s0_0_0] 2025.04.01 23:58:04.209971 [ 694 ] BaseDaemon: E [s0_0_0] 2025.04.01 23:58:04.210037 [ 694 ] BaseDaemon: Stack trace: 0x00005574b34b432d 0x00005574b3b12aba 0x00007f0e93936520 0x00007f0e9398a9fd 0x00007f0e93936476 0x00007f0e9391c7f3 0x00005574b3438c6e 0x00005574b343a21c 0x00005574a2abc545 0x00005574a2ad8dce 0x00005574c13182bb 0x00005574c1312b24 0x00005574c13163eb 0x00005574c8d2c4a7 0x00005574c84604d7 0x00005574c849c1ce 0x00005574c8480e51 0x00005574c847f3fc 0x00005574c847eee2 0x00005574c847cf07 0x00005574b360521b 0x00005574b36124f0 0x00005574a2a6be77 0x00007f0e93988ac3 0x00007f0e93a1a850 E [s0_0_0] 2025.04.01 23:58:04.255335 [ 694 ] BaseDaemon: 0.0. inlined from ./build_docker/./src/Common/StackTrace.cpp:381: StackTrace::tryCapture() E [s0_0_0] 2025.04.01 23:58:04.255516 [ 694 ] BaseDaemon: 0. ./build_docker/./src/Common/StackTrace.cpp:350: StackTrace::StackTrace(ucontext_t const&) @ 0x000000001bc8932d E [s0_0_0] 2025.04.01 23:58:04.308384 [ 694 ] BaseDaemon: 1. ./build_docker/./src/Common/SignalHandlers.cpp:102: signalHandler(int, siginfo_t*, void*) @ 0x000000001c2e7aba E [s0_0_0] 2025.04.01 23:58:04.308537 [ 694 ] BaseDaemon: 2. ? @ 0x00007f0e93936520 E [s0_0_0] 2025.04.01 23:58:04.308604 [ 694 ] BaseDaemon: 3. ? @ 0x00007f0e9398a9fd E [s0_0_0] 2025.04.01 23:58:04.308657 [ 694 ] BaseDaemon: 4. ? @ 0x00007f0e93936476 E [s0_0_0] 2025.04.01 23:58:04.308740 [ 694 ] BaseDaemon: 5. ? @ 0x00007f0e9391c7f3 E [s0_0_0] 2025.04.01 23:58:04.378801 [ 694 ] BaseDaemon: 6. ./build_docker/./src/Common/Exception.cpp:48: DB::abortOnFailedAssertion(String const&, void* const*, unsigned long, unsigned long) @ 0x000000001bc0dc6e E [s0_0_0] 2025.04.01 23:58:04.485804 [ 694 ] BaseDaemon: 7.0. inlined from ./build_docker/./src/Common/Exception.cpp:70: DB::handle_error_code(String const&, int, bool, std::vector> const&) E [s0_0_0] 2025.04.01 23:58:04.485957 [ 694 ] BaseDaemon: 7. ./build_docker/./src/Common/Exception.cpp:111: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc0f21c E [s0_0_0] 2025.04.01 23:58:04.551519 [ 694 ] BaseDaemon: 8. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000b291545 E [s0_0_0] 2025.04.01 23:58:04.583678 [ 694 ] BaseDaemon: 9. DB::Exception::Exception<>(int, FormatStringHelperImpl<>) @ 0x000000000b2addce E [s0_0_0] 2025.04.01 23:58:04.738435 [ 694 ] BaseDaemon: 10. ./build_docker/./src/QueryPipeline/RemoteQueryExecutor.cpp:727: DB::RemoteQueryExecutor::processReadTaskRequest() @ 0x0000000029aed2bb E [s0_0_0] 2025.04.01 23:58:04.909896 [ 694 ] BaseDaemon: 11. 
./build_docker/./src/QueryPipeline/RemoteQueryExecutor.cpp:623: DB::RemoteQueryExecutor::processPacket(DB::Packet) @ 0x0000000029ae7b24 E [s0_0_0] 2025.04.01 23:58:05.069932 [ 694 ] BaseDaemon: 12. ./build_docker/./src/QueryPipeline/RemoteQueryExecutor.cpp:562: DB::RemoteQueryExecutor::readAsync() @ 0x0000000029aeb3eb E [s0_0_0] 2025.04.01 23:58:05.114626 [ 694 ] BaseDaemon: 13. ./build_docker/./src/Processors/Sources/RemoteSource.cpp:182: DB::RemoteSource::tryGenerate() @ 0x00000000315014a7 E [s0_0_0] 2025.04.01 23:58:05.155187 [ 694 ] BaseDaemon: 14. ./build_docker/./src/Processors/ISource.cpp:108: DB::ISource::work() @ 0x0000000030c354d7 E [s0_0_0] 2025.04.01 23:58:05.179766 [ 694 ] BaseDaemon: 15.0. inlined from ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:49: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) E [s0_0_0] 2025.04.01 23:58:05.179974 [ 694 ] BaseDaemon: 15. ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:98: DB::ExecutionThreadContext::executeTask() @ 0x0000000030c711ce E [s0_0_0] 2025.04.01 23:58:05.254807 [ 694 ] BaseDaemon: 16. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:290: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic*) @ 0x0000000030c55e51 E [s0_0_0] 2025.04.01 23:58:05.325165 [ 694 ] BaseDaemon: 17.0. inlined from ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:256: DB::PipelineExecutor::executeSingleThread(unsigned long) E [s0_0_0] 2025.04.01 23:58:05.325342 [ 694 ] BaseDaemon: 17. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:442: DB::PipelineExecutor::executeImpl(unsigned long, bool) @ 0x0000000030c543fc E [s0_0_0] 2025.04.01 23:58:05.385704 [ 694 ] BaseDaemon: 18. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:127: DB::PipelineExecutor::execute(unsigned long, bool) @ 0x0000000030c53ee2 E [s0_0_0] 2025.04.01 23:58:05.406669 [ 694 ] BaseDaemon: 19.0. inlined from ./build_docker/./src/Processors/Executors/CompletedPipelineExecutor.cpp:49: DB::threadFunction(DB::CompletedPipelineExecutor::Data&, std::shared_ptr, unsigned long, bool) E [s0_0_0] 2025.04.01 23:58:05.406856 [ 694 ] BaseDaemon: 19.1. inlined from ./build_docker/./src/Processors/Executors/CompletedPipelineExecutor.cpp:89: operator() E [s0_0_0] 2025.04.01 23:58:05.406917 [ 694 ] BaseDaemon: 19.2. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:394: ? E [s0_0_0] 2025.04.01 23:58:05.406985 [ 694 ] BaseDaemon: 19.3. inlined from ./contrib/llvm-project/libcxx/include/tuple:1789: _ZNSt3__118__apply_tuple_implB6v15007IRZN2DB25CompletedPipelineExecutor7executeEvE3$_0RNS_5tupleIJEEETpTnmJEEEDcOT_OT0_NS_15__tuple_indicesIJXspT1_EEEE E [s0_0_0] 2025.04.01 23:58:05.407062 [ 694 ] BaseDaemon: 19.4. inlined from ./contrib/llvm-project/libcxx/include/tuple:1798: decltype(auto) std::apply[abi:v15007]&>(DB::CompletedPipelineExecutor::execute()::$_0&, std::tuple<>&) E [s0_0_0] 2025.04.01 23:58:05.407122 [ 694 ] BaseDaemon: 19.5. inlined from ./src/Common/ThreadPool.h:311: operator() E [s0_0_0] 2025.04.01 23:58:05.407166 [ 694 ] BaseDaemon: 19.6. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:394: ? E [s0_0_0] 2025.04.01 23:58:05.407216 [ 694 ] BaseDaemon: 19.7. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:479: ? E [s0_0_0] 2025.04.01 23:58:05.407275 [ 694 ] BaseDaemon: 19.8. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:235: ? 
E [s0_0_0] 2025.04.01 23:58:05.407323 [ 694 ] BaseDaemon: 19. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000030c51f07 E [s0_0_0] 2025.04.01 23:58:05.441140 [ 694 ] BaseDaemon: 20.0. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:848: ? E [s0_0_0] 2025.04.01 23:58:05.441288 [ 694 ] BaseDaemon: 20.1. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:1197: ? E [s0_0_0] 2025.04.01 23:58:05.441351 [ 694 ] BaseDaemon: 20. ./build_docker/./src/Common/ThreadPool.cpp:785: ThreadPoolImpl::ThreadFromThreadPool::worker() @ 0x000000001bdda21b E [s0_0_0] 2025.04.01 23:58:05.506716 [ 694 ] BaseDaemon: 21.0. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:359: ? E [s0_0_0] 2025.04.01 23:58:05.506886 [ 694 ] BaseDaemon: 21.1. inlined from ./contrib/llvm-project/libcxx/include/thread:284: void std::__thread_execute[abi:v15007]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*, 2ul>(std::tuple>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>&, std::__tuple_indices<2ul>) E [s0_0_0] 2025.04.01 23:58:05.506972 [ 694 ] BaseDaemon: 21. ./contrib/llvm-project/libcxx/include/thread:295: void* std::__thread_proxy[abi:v15007]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>>(void*) @ 0x000000001bde74f0 E [s0_0_0] 2025.04.01 23:58:05.539808 [ 694 ] BaseDaemon: 22. asan_thread_start(void*) @ 0x000000000b240e77 E [s0_0_0] 2025.04.01 23:58:05.539945 [ 694 ] BaseDaemon: 23. ? @ 0x00007f0e93988ac3 E [s0_0_0] 2025.04.01 23:58:05.539990 [ 694 ] BaseDaemon: 24. ? @ 0x00007f0e93a1a850 E [s0_0_0] 2025.04.01 23:58:06.123865 [ 694 ] BaseDaemon: Integrity check of the executable successfully passed (checksum: DB310373A4AD542A55A473D2E57E140B) E [s0_0_0] 2025.04.01 23:58:07.235615 [ 694 ] BaseDaemon: This ClickHouse version is not official and should be upgraded to the official build. E [s0_0_0] 2025.04.01 23:58:07.235970 [ 694 ] BaseDaemon: Changed settings: parallel_distributed_insert_select = 1 E [s0_0_0] 2025.04.01 23:58:11.210134 [ 696 ] BaseDaemon: ######################################## E [s0_0_0] 2025.04.01 23:58:11.210289 [ 696 ] BaseDaemon: (version 24.12.2.20221.altinityantalya (altinity build), build id: E3F43B0C9BFE6311FC1B0D8F77858862995FA832, git hash: 82252d159dc02cab0f366aaa5691adc1545dd11d) (from thread 676) (query_id: 0278b966-b697-4f32-8138-3ac1c5d0fe59) (query: INSERT INTO insert_select_replicated_local SETTINGS parallel_distributed_insert_select = 1 SELECT * FROM s3Cluster('first_shard', 'http://minio1:9001/root/data/generated/*.csv', 'minio', '[HIDDEN]', 'CSV', 'a String, b UInt64') SETTINGS parallel_distributed_insert_select = 1) Received signal Segmentation fault (11) E [s0_0_0] 2025.04.01 23:58:11.210419 [ 696 ] BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code. E [s0_0_0] 2025.04.01 23:58:11.210500 [ 696 ] BaseDaemon: Stack trace: 0x00005574b34b432d 0x00005574b3b12aba 0x00007f0e93936520 0x00007f0e9391c899 0x00005574b3438c6e 0x00005574b343a21c 0x00005574a2abc545 0x00005574a2ad8dce 0x00005574c13182bb 0x00005574c1312b24 0x00005574c13163eb 0x00005574c8d2c4a7 0x00005574c84604d7 0x00005574c849c1ce 0x00005574c8480e51 0x00005574c847f3fc 0x00005574c847eee2 0x00005574c847cf07 0x00005574b360521b 0x00005574b36124f0 0x00005574a2a6be77 0x00007f0e93988ac3 0x00007f0e93a1a850 E [s0_0_0] 2025.04.01 23:58:11.268696 [ 696 ] BaseDaemon: 0.0. 
inlined from ./build_docker/./src/Common/StackTrace.cpp:381: StackTrace::tryCapture() E [s0_0_0] 2025.04.01 23:58:11.268928 [ 696 ] BaseDaemon: 0. ./build_docker/./src/Common/StackTrace.cpp:350: StackTrace::StackTrace(ucontext_t const&) @ 0x000000001bc8932d E [s0_0_0] 2025.04.01 23:58:11.314203 [ 696 ] BaseDaemon: 1. ./build_docker/./src/Common/SignalHandlers.cpp:102: signalHandler(int, siginfo_t*, void*) @ 0x000000001c2e7aba E [s0_0_0] 2025.04.01 23:58:11.314334 [ 696 ] BaseDaemon: 2. ? @ 0x00007f0e93936520 E [s0_0_0] 2025.04.01 23:58:11.314371 [ 696 ] BaseDaemon: 3. ? @ 0x00007f0e9391c899 E [s0_0_0] 2025.04.01 23:58:11.387739 [ 696 ] BaseDaemon: 4. ./build_docker/./src/Common/Exception.cpp:48: DB::abortOnFailedAssertion(String const&, void* const*, unsigned long, unsigned long) @ 0x000000001bc0dc6e E [s0_0_0] 2025.04.01 23:58:11.460957 [ 696 ] BaseDaemon: 5.0. inlined from ./build_docker/./src/Common/Exception.cpp:70: DB::handle_error_code(String const&, int, bool, std::vector> const&) E [s0_0_0] 2025.04.01 23:58:11.461146 [ 696 ] BaseDaemon: 5. ./build_docker/./src/Common/Exception.cpp:111: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc0f21c E [s0_0_0] 2025.04.01 23:58:11.509157 [ 696 ] BaseDaemon: 6. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000b291545 E [s0_0_0] 2025.04.01 23:58:11.542134 [ 696 ] BaseDaemon: 7. DB::Exception::Exception<>(int, FormatStringHelperImpl<>) @ 0x000000000b2addce E [s0_0_0] 2025.04.01 23:58:11.692149 [ 696 ] BaseDaemon: 8. ./build_docker/./src/QueryPipeline/RemoteQueryExecutor.cpp:727: DB::RemoteQueryExecutor::processReadTaskRequest() @ 0x0000000029aed2bb E [s0_0_0] 2025.04.01 23:58:11.837022 [ 696 ] BaseDaemon: 9. ./build_docker/./src/QueryPipeline/RemoteQueryExecutor.cpp:623: DB::RemoteQueryExecutor::processPacket(DB::Packet) @ 0x0000000029ae7b24 E [s0_0_0] 2025.04.01 23:58:12.047986 [ 696 ] BaseDaemon: 10. ./build_docker/./src/QueryPipeline/RemoteQueryExecutor.cpp:562: DB::RemoteQueryExecutor::readAsync() @ 0x0000000029aeb3eb E [s0_0_0] 2025.04.01 23:58:12.112268 [ 696 ] BaseDaemon: 11. ./build_docker/./src/Processors/Sources/RemoteSource.cpp:182: DB::RemoteSource::tryGenerate() @ 0x00000000315014a7 E [s0_0_0] 2025.04.01 23:58:12.144719 [ 696 ] BaseDaemon: 12. ./build_docker/./src/Processors/ISource.cpp:108: DB::ISource::work() @ 0x0000000030c354d7 E [s0_0_0] 2025.04.01 23:58:12.159515 [ 696 ] BaseDaemon: 13.0. inlined from ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:49: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) E [s0_0_0] 2025.04.01 23:58:12.159611 [ 696 ] BaseDaemon: 13. ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:98: DB::ExecutionThreadContext::executeTask() @ 0x0000000030c711ce E [s0_0_0] 2025.04.01 23:58:12.203259 [ 696 ] BaseDaemon: 14. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:290: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic*) @ 0x0000000030c55e51 E [s0_0_0] 2025.04.01 23:58:12.243429 [ 696 ] BaseDaemon: 15.0. inlined from ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:256: DB::PipelineExecutor::executeSingleThread(unsigned long) E [s0_0_0] 2025.04.01 23:58:12.243547 [ 696 ] BaseDaemon: 15. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:442: DB::PipelineExecutor::executeImpl(unsigned long, bool) @ 0x0000000030c543fc E [s0_0_0] 2025.04.01 23:58:12.286913 [ 696 ] BaseDaemon: 16. 
./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:127: DB::PipelineExecutor::execute(unsigned long, bool) @ 0x0000000030c53ee2 E [s0_0_0] 2025.04.01 23:58:12.305945 [ 696 ] BaseDaemon: 17.0. inlined from ./build_docker/./src/Processors/Executors/CompletedPipelineExecutor.cpp:49: DB::threadFunction(DB::CompletedPipelineExecutor::Data&, std::shared_ptr, unsigned long, bool) E [s0_0_0] 2025.04.01 23:58:12.306060 [ 696 ] BaseDaemon: 17.1. inlined from ./build_docker/./src/Processors/Executors/CompletedPipelineExecutor.cpp:89: operator() E [s0_0_0] 2025.04.01 23:58:12.306107 [ 696 ] BaseDaemon: 17.2. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:394: ? E [s0_0_0] 2025.04.01 23:58:12.306170 [ 696 ] BaseDaemon: 17.3. inlined from ./contrib/llvm-project/libcxx/include/tuple:1789: _ZNSt3__118__apply_tuple_implB6v15007IRZN2DB25CompletedPipelineExecutor7executeEvE3$_0RNS_5tupleIJEEETpTnmJEEEDcOT_OT0_NS_15__tuple_indicesIJXspT1_EEEE E [s0_0_0] 2025.04.01 23:58:12.306229 [ 696 ] BaseDaemon: 17.4. inlined from ./contrib/llvm-project/libcxx/include/tuple:1798: decltype(auto) std::apply[abi:v15007]&>(DB::CompletedPipelineExecutor::execute()::$_0&, std::tuple<>&) E [s0_0_0] 2025.04.01 23:58:12.306274 [ 696 ] BaseDaemon: 17.5. inlined from ./src/Common/ThreadPool.h:311: operator() E [s0_0_0] 2025.04.01 23:58:12.306310 [ 696 ] BaseDaemon: 17.6. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:394: ? E [s0_0_0] 2025.04.01 23:58:12.306349 [ 696 ] BaseDaemon: 17.7. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:479: ? E [s0_0_0] 2025.04.01 23:58:12.306393 [ 696 ] BaseDaemon: 17.8. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:235: ? E [s0_0_0] 2025.04.01 23:58:12.306434 [ 696 ] BaseDaemon: 17. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000030c51f07 E [s0_0_0] 2025.04.01 23:58:12.339526 [ 696 ] BaseDaemon: 18.0. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:848: ? E [s0_0_0] 2025.04.01 23:58:12.339613 [ 696 ] BaseDaemon: 18.1. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:1197: ? E [s0_0_0] 2025.04.01 23:58:12.339686 [ 696 ] BaseDaemon: 18. ./build_docker/./src/Common/ThreadPool.cpp:785: ThreadPoolImpl::ThreadFromThreadPool::worker() @ 0x000000001bdda21b E [s0_0_0] 2025.04.01 23:58:12.404976 [ 696 ] BaseDaemon: 19.0. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:359: ? E [s0_0_0] 2025.04.01 23:58:12.405133 [ 696 ] BaseDaemon: 19.1. inlined from ./contrib/llvm-project/libcxx/include/thread:284: void std::__thread_execute[abi:v15007]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*, 2ul>(std::tuple>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>&, std::__tuple_indices<2ul>) E [s0_0_0] 2025.04.01 23:58:12.405207 [ 696 ] BaseDaemon: 19. ./contrib/llvm-project/libcxx/include/thread:295: void* std::__thread_proxy[abi:v15007]>, void (ThreadPoolImpl::ThreadFromThreadPool::*)(), ThreadPoolImpl::ThreadFromThreadPool*>>(void*) @ 0x000000001bde74f0 E [s0_0_0] 2025.04.01 23:58:12.436525 [ 696 ] BaseDaemon: 20. asan_thread_start(void*) @ 0x000000000b240e77 E [s0_0_0] 2025.04.01 23:58:12.436650 [ 696 ] BaseDaemon: 21. ? @ 0x00007f0e93988ac3 E [s0_0_0] 2025.04.01 23:58:12.436697 [ 696 ] BaseDaemon: 22. ? 
@ 0x00007f0e93a1a850 E [s0_0_0] 2025.04.01 23:58:12.921173 [ 696 ] BaseDaemon: Integrity check of the executable successfully passed (checksum: DB310373A4AD542A55A473D2E57E140B) E [s0_0_0] 2025.04.01 23:58:14.100872 [ 696 ] BaseDaemon: This ClickHouse version is not official and should be upgraded to the official build. E [s0_0_0] 2025.04.01 23:58:14.101183 [ 696 ] BaseDaemon: Changed settings: parallel_distributed_insert_select = 1 E Error on processing query: Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from 172.16.2.9:9000. (ATTEMPT_TO_READ_AFTER_EOF), Stack trace (when copying this message, always include the lines below): E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x0000000038031254 E 1. ./build_docker/./src/Common/Exception.cpp:105: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc0ed05 E 2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000b291545 E 3. DB::Exception::Exception<>(int, FormatStringHelperImpl<>) @ 0x000000000b2addce E 4. ./build_docker/./src/IO/VarInt.cpp:13: DB::throwReadAfterEOF() @ 0x000000001bd5bc0b E 5. ./src/IO/VarInt.h:78: void DB::varint_impl::readVarUInt(unsigned long&, DB::ReadBuffer&) @ 0x000000001bd083d0 E 6. ./src/IO/VarInt.h:96: DB::Connection::receivePacket() @ 0x00000000307f4ffa E 7. ./build_docker/./src/Client/ClientBase.cpp:1247: DB::ClientBase::receiveAndProcessPacket(std::shared_ptr, bool) @ 0x00000000307a2afe E 8. ./build_docker/./src/Client/ClientBase.cpp:1222: DB::ClientBase::receiveResult(std::shared_ptr, int, bool) @ 0x00000000307a1867 E 9. ./build_docker/./src/Client/ClientBase.cpp:1137: DB::ClientBase::processOrdinaryQuery(String const&, std::shared_ptr) @ 0x000000003079fbf1 E 10. ./build_docker/./src/Client/ClientBase.cpp:2095: DB::ClientBase::processParsedSingleQuery(String const&, String const&, std::shared_ptr, std::optional, bool) @ 0x000000003079bfc3 E 11. ./build_docker/./src/Client/ClientBase.cpp:2436: DB::ClientBase::executeMultiQuery(String const&) @ 0x00000000307b41f8 E 12. ./build_docker/./src/Client/ClientBase.cpp:2582: DB::ClientBase::processQueryText(String const&) @ 0x00000000307b602a E 13. ./build_docker/./src/Client/ClientBase.cpp:2938: DB::ClientBase::runNonInteractive() @ 0x00000000307bc57b E 14. ./build_docker/./programs/client/Client.cpp:405: DB::Client::main(std::vector> const&) @ 0x000000001bff459f E 15. ./build_docker/./base/poco/Util/src/Application.cpp:315: Poco::Util::Application::run() @ 0x00000000382716b7 E 16. ./build_docker/./programs/client/Client.cpp:1399: mainEntryClickHouseClient(int, char**) @ 0x000000001c010ec9 E 17. ./build_docker/./programs/main.cpp:269: main @ 0x000000000b27d81f E 18. ? @ 0x00007fe34081dd90 E 19. ? @ 0x00007fe34081de40 E 20. _start @ 0x000000000b1a602e E (version 24.12.2.20221.altinityantalya (altinity build)) E (query: INSERT INTO insert_select_replicated_local SELECT * FROM s3Cluster( E 'first_shard', E 'http://minio1:9001/root/data/generated/*.csv', 'minio', 'minio123', 'CSV','a String, b UInt64' E ) SETTINGS parallel_distributed_insert_select=1;) helpers/client.py:248: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml Copy common default production configuration from /clickhouse-config. 
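For reference, the statement that crashes s0_0_0 in the failure above, restated as a standalone snippet in the same helper style the test itself uses. This is a sketch only: started_cluster is the fixture of this test session, and the cluster name, URL, credentials and schema are the test values shown in this log.

# Hedged reproduction sketch of the failing step from
# test_distributed_insert_select_with_replicated; assumes the cluster
# fixture from this session is already started.
node = started_cluster.instances["s0_0_0"]
node.query(
    """
    INSERT INTO insert_select_replicated_local
    SELECT * FROM s3Cluster(
        'first_shard',
        'http://minio1:9001/root/data/generated/*.csv',
        'minio', 'minio123', 'CSV', 'a String, b UInt64'
    )
    SETTINGS parallel_distributed_insert_select = 1
    """
)

The crash reports above show the "Replica info is not initialized" logical error being raised from DB::RemoteQueryExecutor::processReadTaskRequest(), with parallel_distributed_insert_select = 1 as the only changed setting.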
Files: config.xml, users.xml data/clickhouse/part1.csv data/clickhouse/part123.csv data/database/part2.csv data/database/partition675.csv data/generated/file_0.csv data/generated/file_1.csv data/generated/file_10.csv data/generated/file_11.csv data/generated/file_12.csv data/generated/file_13.csv data/generated/file_14.csv data/generated/file_15.csv data/generated/file_16.csv data/generated/file_17.csv data/generated/file_18.csv data/generated/file_19.csv data/generated/file_2.csv data/generated/file_20.csv data/generated/file_21.csv data/generated/file_22.csv data/generated/file_23.csv data/generated/file_24.csv data/generated/file_25.csv data/generated/file_26.csv data/generated/file_27.csv data/generated/file_28.csv data/generated/file_29.csv data/generated/file_3.csv data/generated/file_30.csv data/generated/file_31.csv data/generated/file_32.csv data/generated/file_33.csv data/generated/file_34.csv data/generated/file_35.csv data/generated/file_36.csv data/generated/file_37.csv data/generated/file_38.csv data/generated/file_39.csv data/generated/file_4.csv data/generated/file_40.csv data/generated/file_41.csv data/generated/file_42.csv data/generated/file_43.csv data/generated/file_44.csv data/generated/file_45.csv data/generated/file_46.csv data/generated/file_47.csv data/generated/file_48.csv data/generated/file_49.csv data/generated/file_5.csv data/generated/file_50.csv data/generated/file_51.csv data/generated/file_52.csv data/generated/file_53.csv data/generated/file_54.csv data/generated/file_55.csv data/generated/file_56.csv data/generated/file_57.csv data/generated/file_58.csv data/generated/file_59.csv data/generated/file_6.csv data/generated/file_60.csv data/generated/file_61.csv data/generated/file_62.csv data/generated/file_63.csv data/generated/file_64.csv data/generated/file_65.csv data/generated/file_66.csv data/generated/file_67.csv data/generated/file_68.csv data/generated/file_69.csv data/generated/file_7.csv data/generated/file_70.csv data/generated/file_71.csv data/generated/file_72.csv data/generated/file_73.csv data/generated/file_74.csv data/generated/file_75.csv data/generated/file_76.csv data/generated/file_77.csv data/generated/file_78.csv data/generated/file_79.csv data/generated/file_8.csv data/generated/file_80.csv data/generated/file_81.csv data/generated/file_82.csv data/generated/file_83.csv data/generated/file_84.csv data/generated/file_85.csv data/generated/file_86.csv data/generated/file_87.csv data/generated/file_88.csv data/generated/file_89.csv data/generated/file_9.csv data/generated/file_90.csv data/generated/file_91.csv data/generated/file_92.csv data/generated/file_93.csv data/generated/file_94.csv data/generated/file_95.csv data/generated/file_96.csv data/generated/file_97.csv data/generated/file_98.csv data/generated/file_99.csv ---------------------------- Captured stderr setup ----------------------------- ENV DOCKER_KERBEROS_KDC_TAG 9391ecdee8d7 ENV CLICKHOUSE_TESTS_SERVER_BIN_PATH /clickhouse ENV MSAN_OPTIONS abort_on_error=1 poison_in_dtor=1 ENV JAVA_TOOL_OPTIONS -Djdk.attach.allowAttachSelf=true ENV TSAN_OPTIONS halt_on_error=1 abort_on_error=1 history_size=7 memory_limit_mb=46080 second_deadlock_stack=1 ENV HOSTNAME 3a0ad8bfd426 ENV SHLVL 0 ENV HOME /root ENV OLDPWD / ENV DOCKER_HELPER_TAG 5dc43a6382f0 ENV PYTHONUNBUFFERED 1 ENV DOCKER_PYTHON_BOTTLE_TAG caad4729259e ENV UBSAN_OPTIONS print_stacktrace=1 ENV PYTEST_ADDOPTS -rfEps --run-id=2 --color=no --durations=0 test_refreshable_mat_view_replicated/test.py::test_long_query_cancel 
test_refreshable_mat_view_replicated/test.py::test_query_fail test_refreshable_mat_view_replicated/test.py::test_query_retry 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-True]' test_s3_cluster/test.py::test_distributed_insert_select_with_replicated test_s3_cluster/test.py::test_distributed_s3_table_engine test_s3_cluster/test.py::test_hive_partitioning test_s3_cluster/test.py::test_parallel_distributed_insert_select_with_schema_inference test_s3_cluster/test.py::test_remote_hedged test_s3_cluster/test.py::test_remote_no_hedged test_s3_cluster/test.py::test_select_all test_s3_cluster/test.py::test_skip_unavailable_shards test_s3_cluster/test.py::test_union_all -vvv ENV CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH /clickhouse-library-bridge ENV COMPOSE_HTTP_TIMEOUT 600 ENV DOCKER_MYSQL_PHP_CLIENT_TAG 88be89c1e3b6 ENV DOCKER_DOTNET_CLIENT_TAG 11de0b29a15d ENV CLICKHOUSE_TESTS_CLIENT_BIN_PATH /clickhouse ENV DOCKER_MYSQL_JS_CLIENT_TAG 41ba7c2ec2a1 ENV PATH /spark-3.3.2-bin-hadoop3/bin:/opt/gdb/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ENV DOCKER_KERBERIZED_HADOOP_TAG latest ENV DOCKER_CHANNEL stable ENV DOCKER_CLIENT_TIMEOUT 300 ENV DOCKER_POSTGRESQL_JAVA_CLIENT_TAG a4eff5c7f4d6 ENV DOCKER_NGINX_DAV_TAG b55ac9cd7519 ENV DOCKER_MYSQL_GOLANG_CLIENT_TAG 9bec2a638e6e ENV PWD /ClickHouse/tests/integration ENV DOCKER_MYSQL_JAVA_CLIENT_TAG 766bff31cfe4 ENV CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH /clickhouse-odbc-bridge ENV CLICKHOUSE_TESTS_BASE_CONFIG_DIR /clickhouse-config ENV TZ Etc/UTC ENV JAVA_PATH /usr/lib/jvm/java-11-openjdk-amd64/bin/java ENV DOCKER_BASE_TAG 6712d5cc610d ENV SPARK_HOME /spark-3.3.2-bin-hadoop3 ENV LC_CTYPE C.UTF-8 ENV INTEGRATION_TESTS_RUN_ID 2 ENV WORKER_FREE_PORTS 30000 30001 30002 30003 30004 30005 30006 30007 30008 30009 30010 30011 30012 30013 30014 30015 30016 30017 30018 30019 30020 30021 30022 30023 30024 30025 30026 30027 30028 30029 30030 30031 30032 30033 30034 30035 30036 30037 30038 30039 30040 30041 30042 30043 30044 30045 30046 30047 30048 30049 ENV PYTEST_CURRENT_TEST test_s3_cluster/test.py::test_distributed_insert_select_with_replicated (setup) CLUSTER INIT base_config_dir:/clickhouse-config clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log Setup Keeper Cluster name: project_name:roottests3cluster. 
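The "Added instance" records that follow come from the cluster definition in test_s3_cluster/test.py, which is not reproduced in this log. With the helpers referenced here (helpers/cluster.py), that definition would look roughly like the sketch below; the exact keyword arguments and macro values are assumptions, not copied from the test.

# Hedged sketch of the cluster setup behind the "Added instance" records below.
# Instance names and config files match the log; argument values are assumed.
from helpers.cluster import ClickHouseCluster

cluster = ClickHouseCluster(__file__)
for name, shard in (("s0_0_0", "0"), ("s0_0_1", "0"), ("s0_1_0", "1")):
    cluster.add_instance(
        name,
        main_configs=["configs/cluster.xml", "configs/named_collections.xml"],
        macros={"shard": shard, "replica": name},
        with_zookeeper=True,  # the Keeper containers set up above
        with_minio=True,      # the MinIO container holding the S3 test data
    )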
Added instance name:s0_0_0 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env', '--project-name', 'roottests3cluster', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log Cluster name: project_name:roottests3cluster. Added instance name:s0_0_1 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env', '--project-name', 'roottests3cluster', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log Cluster name: project_name:roottests3cluster. Added instance name:s0_1_0 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env', '--project-name', 'roottests3cluster', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ Starting cluster... Running tests in /ClickHouse/tests/integration/test_s3_cluster/test.py Cluster start called. is_up=False Docker networks for project roottests3cluster are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottests3cluster are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottests3cluster are DRIVER VOLUME NAME Cleanup called Docker networks for project roottests3cluster are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottests3cluster are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottests3cluster are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottests3cluster-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottests3cluster Trying to prune unused networks... 
Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 Setup directory for instance: s0_0_0 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_s3_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_s3_cluster/configs/named_collections.xml'] to /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/configs/config.d Setup database dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/database Setup logs dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: s0_0_1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_s3_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_s3_cluster/configs/named_collections.xml'] to /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/database Setup logs dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: s0_1_0 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_s3_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_s3_cluster/configs/named_collections.xml'] to /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/configs/config.d Setup database dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/database Setup logs dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:6712d5cc610d', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/config', 'keeper_db_dir1': 
'/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/coordination', 'MINIO_CERTS_DIR': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/minio/certs', 'MINIO_DATA_DIR': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/minio/data', 'MINIO_PORT': '9001', 'SSL_CERT_FILE': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/minio/certs/public.crt', 'RESOLVER_LOGS': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/resolver', 'RESOLVER_LOGS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --project-name roottests3cluster --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml pull] Stderr: zoo2 Skipped - Image is already being pulled by zoo1 Stderr: zoo3 Skipped - Image is already being pulled by zoo1 Stderr: s0_0_1 Skipped - Image is already being pulled by zoo1 Stderr: s0_1_0 Skipped - Image is already being pulled by zoo1 Stderr: s0_0_0 Skipped - Image is already being pulled by zoo1 Stderr: proxy1 Skipped - Image is already being pulled by proxy2 Stderr: proxy2 Pulling Stderr: zoo1 Pulling Stderr: resolver Pulling Stderr: minio1 Pulling Stderr: resolver Pulled Stderr: minio1 Pulled Stderr: proxy2 Pulled Stderr: zoo1 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/log', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/config', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/coordination', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/log', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/config', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/coordination', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/log', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/config', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/coordination'] Command:[docker compose --project-name roottests3cluster --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --file 
/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr:time="2025-04-01T23:57:53Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottests3cluster_default Creating Stderr: Network roottests3cluster_default Created Stderr: Container roottests3cluster-zoo2-1 Creating Stderr: Container roottests3cluster-zoo3-1 Creating Stderr: Container roottests3cluster-zoo1-1 Creating Stderr: Container roottests3cluster-zoo2-1 Created Stderr: Container roottests3cluster-zoo1-1 Created Stderr: Container roottests3cluster-zoo3-1 Created Stderr: Container roottests3cluster-zoo2-1 Starting Stderr: Container roottests3cluster-zoo3-1 Starting Stderr: Container roottests3cluster-zoo1-1 Starting Stderr: Container roottests3cluster-zoo2-1 Started Stderr: Container roottests3cluster-zoo3-1 Started Stderr: Container roottests3cluster-zoo1-1 Started Stderr:time="2025-04-01T23:57:54Z" level=debug msg="otel error" error="" Stderr:time="2025-04-01T23:57:54Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottests3cluster-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.2.4, port:2181, use_ssl:False Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottests3cluster-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.2.2, port:2181, use_ssl:False Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. 
Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottests3cluster-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.2.3, port:2181, use_ssl:False Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') Trying to create Minio instance by command docker compose --project-name roottests3cluster --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d Command:[docker compose --project-name roottests3cluster --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d] Stderr:time="2025-04-01T23:57:57Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Volume "roottests3cluster_data1-1" Creating Stderr: Volume "roottests3cluster_data1-1" Created Stderr:time="2025-04-01T23:57:57Z" level=warning msg="Found orphan containers ([roottests3cluster-zoo2-1 roottests3cluster-zoo1-1 roottests3cluster-zoo3-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up." Stderr: Container roottests3cluster-proxy2-1 Creating Stderr: Container roottests3cluster-proxy1-1 Creating Stderr: proxy1 The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v3) and no specific platform was requested Stderr: proxy2 The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v3) and no specific platform was requested Stderr: Container roottests3cluster-proxy2-1 Created Stderr: Container roottests3cluster-proxy1-1 Created Stderr: Container roottests3cluster-resolver-1 Creating Stderr: Container roottests3cluster-minio1-1 Creating Stderr: Container roottests3cluster-minio1-1 Created Stderr: Container roottests3cluster-resolver-1 Created Stderr: Container roottests3cluster-proxy1-1 Starting Stderr: Container roottests3cluster-proxy2-1 Starting Stderr: Container roottests3cluster-proxy1-1 Started Stderr: Container roottests3cluster-proxy2-1 Started Stderr: Container roottests3cluster-minio1-1 Starting Stderr: Container roottests3cluster-resolver-1 Starting Stderr: Container roottests3cluster-resolver-1 Started Stderr: Container roottests3cluster-minio1-1 Started Stderr:time="2025-04-01T23:57:58Z" level=debug msg="otel error" error="" Stderr:time="2025-04-01T23:57:58Z" level=debug msg="otel error" error="" Trying to connect to Minio... 
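The MinIO probing and bucket creation logged next are performed by the test helpers; the equivalent steps with the minio Python client look roughly like this sketch, using the endpoint and credentials that appear throughout this log.

# Hedged sketch of the bucket preparation logged below ("S3 bucket 'root' created").
from minio import Minio

client = Minio(
    "172.16.2.8:9001",   # minio1, at the address resolved in this run
    access_key="minio",
    secret_key="minio123",
    secure=False,
)
for bucket in ("root", "root2"):
    if not client.bucket_exists(bucket):
        client.make_bucket(bucket)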
get_instance_ip instance_name=minio1 http://localhost:None "GET /v1.46/containers/roottests3cluster-minio1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=proxy1 http://localhost:None "GET /v1.46/containers/roottests3cluster-proxy1-1/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.2.8:9001 Incremented Retry for (url='/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (2): 172.16.2.8:9001 Incremented Retry for (url='/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (3): 172.16.2.8:9001 Incremented Retry for (url='/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (4): 172.16.2.8:9001 Can't connect to Minio: HTTPConnectionPool(host='172.16.2.8', port=9001): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) Starting new HTTP connection (5): 172.16.2.8:9001 http://172.16.2.8:9001 "GET / HTTP/1.1" 200 0 Connected to Minio. http://172.16.2.8:9001 "GET /root?location= HTTP/1.1" 404 0 http://172.16.2.8:9001 "PUT /root HTTP/1.1" 200 0 S3 bucket 'root' created http://172.16.2.8:9001 "GET /root2?location= HTTP/1.1" 404 0 http://172.16.2.8:9001 "PUT /root2 HTTP/1.1" 200 0 S3 bucket 'root2' created ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --project-name roottests3cluster --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --project-name roottests3cluster --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml up -d --no-recreate] Stderr: Container roottests3cluster-zoo1-1 Running Stderr: Container roottests3cluster-zoo2-1 Running Stderr: Container roottests3cluster-proxy1-1 Running Stderr: Container roottests3cluster-proxy2-1 Running Stderr: Container 
roottests3cluster-zoo3-1 Running Stderr: Container roottests3cluster-minio1-1 Running Stderr: Container roottests3cluster-resolver-1 Running Stderr: Container roottests3cluster-s0_0_1-1 Creating Stderr: Container roottests3cluster-s0_0_0-1 Creating Stderr: Container roottests3cluster-s0_1_0-1 Creating Stderr: Container roottests3cluster-s0_0_0-1 Created Stderr: Container roottests3cluster-s0_1_0-1 Created Stderr: Container roottests3cluster-s0_0_1-1 Created Stderr: Container roottests3cluster-s0_1_0-1 Starting Stderr: Container roottests3cluster-s0_0_0-1 Starting Stderr: Container roottests3cluster-s0_0_1-1 Starting Stderr: Container roottests3cluster-s0_0_1-1 Started Stderr: Container roottests3cluster-s0_1_0-1 Started Stderr: Container roottests3cluster-s0_0_0-1 Started ClickHouse instance created get_instance_ip instance_name=s0_0_0 http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_0-1/json HTTP/1.1" 200 None get_instance_ip instance_name=s0_0_0 http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_0-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in s0_0_0, ip: 172.16.2.9... http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_0-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None ClickHouse s0_0_0 started get_instance_ip instance_name=s0_0_1 http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=s0_0_1 http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in s0_0_1, ip: 172.16.2.11... 
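
The long run of identical "GET /v1.46/containers/<id>/json" requests above comes from the Docker SDK re-inspecting the s0_0_0 container while the harness waits for ClickHouse to start. Roughly, and only as a sketch — the function name, polling interval and readiness condition are assumptions, and the real cluster.py wait is more involved than a plain running check:

import time
import docker  # Docker SDK for Python; its inspect calls produce the /containers/<id>/json requests

def wait_container_running(name: str, timeout: float = 60.0) -> None:
    # Poll the container's inspect data until Docker reports it as running.
    client = docker.from_env()
    container = client.containers.get(name)    # initial GET /containers/<name>/json
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        container.reload()                     # one more GET /containers/<id>/json per iteration
        if container.attrs["State"]["Running"]:
            return
        time.sleep(0.5)
    raise TimeoutError(f"{name} did not reach the running state within {timeout}s")
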
http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/588241bfab92c5d0837319b31ffaa6314125365974ef19e724e9b4126c22eb2b/json HTTP/1.1" 200 None ClickHouse s0_0_1 started get_instance_ip instance_name=s0_1_0 http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_1_0-1/json HTTP/1.1" 200 None get_instance_ip instance_name=s0_1_0 http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_1_0-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in s0_1_0, ip: 172.16.2.10... http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_1_0-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2a69e323746433f66317dec67bb84f57fc95d059a7382fe645a5ecc26f908570/json HTTP/1.1" 200 None ClickHouse s0_1_0 started Cluster started http://172.16.2.8:9001 "PUT /root/data/clickhouse/part1.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/clickhouse/part123.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/database/part2.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/database/partition675.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_0.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_1.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_2.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_3.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_4.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_5.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_6.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_7.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_8.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_9.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_10.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_11.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_12.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_13.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_14.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_15.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_16.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_17.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_18.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_19.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_20.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_21.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_22.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_23.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_24.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_25.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_26.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_27.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_28.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_29.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_30.csv HTTP/1.1" 200 0 
http://172.16.2.8:9001 "PUT /root/data/generated/file_31.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_32.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_33.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_34.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_35.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_36.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_37.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_38.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_39.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_40.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_41.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_42.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_43.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_44.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_45.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_46.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_47.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_48.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_49.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_50.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_51.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_52.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_53.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_54.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_55.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_56.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_57.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_58.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_59.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_60.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_61.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_62.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_63.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_64.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_65.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_66.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_67.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_68.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_69.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_70.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_71.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_72.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_73.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_74.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_75.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_76.csv HTTP/1.1" 200 0 
http://172.16.2.8:9001 "PUT /root/data/generated/file_77.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_78.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_79.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_80.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_81.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_82.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_83.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_84.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_85.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_86.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_87.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_88.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_89.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_90.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_91.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_92.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_93.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_94.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_95.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_96.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_97.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_98.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "PUT /root/data/generated/file_99.csv HTTP/1.1" 200 0 http://172.16.2.8:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix= HTTP/1.1" 200 0 Starting mock server s3_mock.py run container_id:roottests3cluster-resolver-1 detach:False nothrow:False cmd: ['bash', '-c', 'echo aW1wb3J0IHN5cwoKZnJvbSBib3R0bGUgaW1wb3J0IHJlcXVlc3QsIHJlc3BvbnNlLCByb3V0ZSwgcnVuCgoKQHJvdXRlKCIvPF9idWNrZXQ+LzxfcGF0aDpwYXRoPiIpCmRlZiBzZXJ2ZXIoX2J1Y2tldCwgX3BhdGgpOgogICAgcmVzdWx0ID0gKAogICAgICAgIHJlcXVlc3QuaGVhZGVyc1siTXlDdXN0b21IZWFkZXIiXQogICAgICAgIGlmICJNeUN1c3RvbUhlYWRlciIgaW4gcmVxdWVzdC5oZWFkZXJzCiAgICAgICAgZWxzZSAidW5rbm93biIKICAgICkKICAgIHJlc3BvbnNlLmNvbnRlbnRfdHlwZSA9ICJ0ZXh0L3BsYWluIgogICAgcmVzcG9uc2Uuc2V0X2hlYWRlcigiQ29udGVudC1MZW5ndGgiLCBsZW4ocmVzdWx0KSkKICAgIHJldHVybiByZXN1bHQKCgpAcm91dGUoIi8iKQpkZWYgcGluZygpOgogICAgcmVzcG9uc2UuY29udGVudF90eXBlID0gInRleHQvcGxhaW4iCiAgICByZXNwb25zZS5zZXRfaGVhZGVyKCJDb250ZW50LUxlbmd0aCIsIDIpCiAgICByZXR1cm4gIk9LIgoKCnJ1bihob3N0PSIwLjAuMC4wIiwgcG9ydD1pbnQoc3lzLmFyZ3ZbMV0pKQo= | base64 --decode > s3_mock.py'] Command:[docker exec roottests3cluster-resolver-1 bash -c echo 
aW1wb3J0IHN5cwoKZnJvbSBib3R0bGUgaW1wb3J0IHJlcXVlc3QsIHJlc3BvbnNlLCByb3V0ZSwgcnVuCgoKQHJvdXRlKCIvPF9idWNrZXQ+LzxfcGF0aDpwYXRoPiIpCmRlZiBzZXJ2ZXIoX2J1Y2tldCwgX3BhdGgpOgogICAgcmVzdWx0ID0gKAogICAgICAgIHJlcXVlc3QuaGVhZGVyc1siTXlDdXN0b21IZWFkZXIiXQogICAgICAgIGlmICJNeUN1c3RvbUhlYWRlciIgaW4gcmVxdWVzdC5oZWFkZXJzCiAgICAgICAgZWxzZSAidW5rbm93biIKICAgICkKICAgIHJlc3BvbnNlLmNvbnRlbnRfdHlwZSA9ICJ0ZXh0L3BsYWluIgogICAgcmVzcG9uc2Uuc2V0X2hlYWRlcigiQ29udGVudC1MZW5ndGgiLCBsZW4ocmVzdWx0KSkKICAgIHJldHVybiByZXN1bHQKCgpAcm91dGUoIi8iKQpkZWYgcGluZygpOgogICAgcmVzcG9uc2UuY29udGVudF90eXBlID0gInRleHQvcGxhaW4iCiAgICByZXNwb25zZS5zZXRfaGVhZGVyKCJDb250ZW50LUxlbmd0aCIsIDIpCiAgICByZXR1cm4gIk9LIgoKCnJ1bihob3N0PSIwLjAuMC4wIiwgcG9ydD1pbnQoc3lzLmFyZ3ZbMV0pKQo= | base64 --decode > s3_mock.py] run container_id:roottests3cluster-resolver-1 detach:True nothrow:False cmd: ['bash', '-c', 'python3 s3_mock.py 8080 >/var/log/resolver/s3_mock.log 2>/var/log/resolver/s3_mock.err.log'] Command:[docker exec roottests3cluster-resolver-1 bash -c python3 s3_mock.py 8080 >/var/log/resolver/s3_mock.log 2>/var/log/resolver/s3_mock.err.log] run container_id:roottests3cluster-resolver-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8080/'] Command:[docker exec roottests3cluster-resolver-1 curl -s http://localhost:8080/] Exitcode:7 run container_id:roottests3cluster-resolver-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8080/'] Command:[docker exec roottests3cluster-resolver-1 curl -s http://localhost:8080/] Stdout:OK s3_mock.py answered OK on attempt 2 Mock server s3_mock.py started ------------------------------ Captured log setup ------------------------------ 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_KERBEROS_KDC_TAG 9391ecdee8d7 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV CLICKHOUSE_TESTS_SERVER_BIN_PATH /clickhouse (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV MSAN_OPTIONS abort_on_error=1 poison_in_dtor=1 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV JAVA_TOOL_OPTIONS -Djdk.attach.allowAttachSelf=true (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV TSAN_OPTIONS halt_on_error=1 abort_on_error=1 history_size=7 memory_limit_mb=46080 second_deadlock_stack=1 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV HOSTNAME 3a0ad8bfd426 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV SHLVL 0 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV HOME /root (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV OLDPWD / (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_HELPER_TAG 5dc43a6382f0 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV PYTHONUNBUFFERED 1 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_PYTHON_BOTTLE_TAG caad4729259e (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV UBSAN_OPTIONS print_stacktrace=1 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV PYTEST_ADDOPTS -rfEps --run-id=2 --color=no --durations=0 test_refreshable_mat_view_replicated/test.py::test_long_query_cancel test_refreshable_mat_view_replicated/test.py::test_query_fail test_refreshable_mat_view_replicated/test.py::test_query_retry 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-True]' 
'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-True]' test_s3_cluster/test.py::test_distributed_insert_select_with_replicated test_s3_cluster/test.py::test_distributed_s3_table_engine test_s3_cluster/test.py::test_hive_partitioning test_s3_cluster/test.py::test_parallel_distributed_insert_select_with_schema_inference test_s3_cluster/test.py::test_remote_hedged test_s3_cluster/test.py::test_remote_no_hedged test_s3_cluster/test.py::test_select_all test_s3_cluster/test.py::test_skip_unavailable_shards test_s3_cluster/test.py::test_union_all -vvv (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH /clickhouse-library-bridge (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV COMPOSE_HTTP_TIMEOUT 600 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_MYSQL_PHP_CLIENT_TAG 88be89c1e3b6 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_DOTNET_CLIENT_TAG 11de0b29a15d (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV CLICKHOUSE_TESTS_CLIENT_BIN_PATH /clickhouse (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_MYSQL_JS_CLIENT_TAG 41ba7c2ec2a1 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV PATH /spark-3.3.2-bin-hadoop3/bin:/opt/gdb/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_KERBERIZED_HADOOP_TAG latest (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_CHANNEL stable (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_CLIENT_TIMEOUT 300 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_POSTGRESQL_JAVA_CLIENT_TAG a4eff5c7f4d6 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_NGINX_DAV_TAG b55ac9cd7519 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_MYSQL_GOLANG_CLIENT_TAG 9bec2a638e6e (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV PWD /ClickHouse/tests/integration (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_MYSQL_JAVA_CLIENT_TAG 766bff31cfe4 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH /clickhouse-odbc-bridge (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV CLICKHOUSE_TESTS_BASE_CONFIG_DIR /clickhouse-config (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV TZ Etc/UTC (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV JAVA_PATH /usr/lib/jvm/java-11-openjdk-amd64/bin/java (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV DOCKER_BASE_TAG 6712d5cc610d (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV SPARK_HOME /spark-3.3.2-bin-hadoop3 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV LC_CTYPE C.UTF-8 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV INTEGRATION_TESTS_RUN_ID 2 (cluster.py:450, 
__init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV WORKER_FREE_PORTS 30000 30001 30002 30003 30004 30005 30006 30007 30008 30009 30010 30011 30012 30013 30014 30015 30016 30017 30018 30019 30020 30021 30022 30023 30024 30025 30026 30027 30028 30029 30030 30031 30032 30033 30034 30035 30036 30037 30038 30039 30040 30041 30042 30043 30044 30045 30046 30047 30048 30049 (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : ENV PYTEST_CURRENT_TEST test_s3_cluster/test.py::test_distributed_insert_select_with_replicated (setup) (cluster.py:450, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : CLUSTER INIT base_config_dir:/clickhouse-config (cluster.py:774, __init__) 2025-04-01 23:57:42 [ 660 ] DEBUG : clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log (cluster.py:1729, add_instance) 2025-04-01 23:57:42 [ 660 ] DEBUG : Setup Keeper (cluster.py:1069, setup_keeper_cmd) 2025-04-01 23:57:42 [ 660 ] DEBUG : Cluster name: project_name:roottests3cluster. Added instance name:s0_0_0 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env', '--project-name', 'roottests3cluster', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ (cluster.py:2025, add_instance) 2025-04-01 23:57:42 [ 660 ] DEBUG : clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log (cluster.py:1729, add_instance) 2025-04-01 23:57:42 [ 660 ] DEBUG : Cluster name: project_name:roottests3cluster. Added instance name:s0_0_1 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env', '--project-name', 'roottests3cluster', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ (cluster.py:2025, add_instance) 2025-04-01 23:57:42 [ 660 ] DEBUG : clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log (cluster.py:1729, add_instance) 2025-04-01 23:57:42 [ 660 ] DEBUG : Cluster name: project_name:roottests3cluster. 
Added instance name:s0_1_0 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env', '--project-name', 'roottests3cluster', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ (cluster.py:2025, add_instance) 2025-04-01 23:57:42 [ 660 ] INFO : Starting cluster... (test.py:94, started_cluster) 2025-04-01 23:57:42 [ 660 ] INFO : Running tests in /ClickHouse/tests/integration/test_s3_cluster/test.py (cluster.py:2793, start) 2025-04-01 23:57:42 [ 660 ] DEBUG : Cluster start called. is_up=False (cluster.py:2800, start) 2025-04-01 23:57:42 [ 660 ] DEBUG : Docker networks for project roottests3cluster are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces) 2025-04-01 23:57:42 [ 660 ] DEBUG : Docker containers for project roottests3cluster are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces) 2025-04-01 23:57:42 [ 660 ] DEBUG : Docker volumes for project roottests3cluster are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces) 2025-04-01 23:57:42 [ 660 ] DEBUG : Cleanup called (cluster.py:894, cleanup) 2025-04-01 23:57:42 [ 660 ] DEBUG : Docker networks for project roottests3cluster are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces) 2025-04-01 23:57:42 [ 660 ] DEBUG : Docker containers for project roottests3cluster are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces) 2025-04-01 23:57:42 [ 660 ] DEBUG : Docker volumes for project roottests3cluster are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces) 2025-04-01 23:57:42 [ 660 ] DEBUG : Command:[docker container list --all --filter name='^/roottests3cluster-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:122, run_and_check) 2025-04-01 23:57:42 [ 660 ] DEBUG : Unstopped containers: {} (cluster.py:908, cleanup) 2025-04-01 23:57:42 [ 660 ] DEBUG : No running containers for project: roottests3cluster (cluster.py:922, cleanup) 2025-04-01 23:57:42 [ 660 ] DEBUG : Trying to prune unused networks... (cluster.py:928, cleanup) 2025-04-01 23:57:42 [ 660 ] DEBUG : Trying to prune unused images... (cluster.py:944, cleanup) 2025-04-01 23:57:42 [ 660 ] DEBUG : Command:[docker image prune -f] (cluster.py:122, run_and_check) 2025-04-01 23:57:42 [ 660 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:146, run_and_check) 2025-04-01 23:57:42 [ 660 ] DEBUG : Images pruned (cluster.py:947, cleanup) 2025-04-01 23:57:42 [ 660 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:953, cleanup) 2025-04-01 23:57:42 [ 660 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:122, run_and_check) 2025-04-01 23:57:42 [ 660 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check) 2025-04-01 23:57:42 [ 660 ] DEBUG : Volumes pruned: 1 (cluster.py:958, cleanup) 2025-04-01 23:57:42 [ 660 ] DEBUG : Setup directory for instance: s0_0_0 (cluster.py:2813, start) 2025-04-01 23:57:42 [ 660 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4639, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Create directory for common tests configuration (cluster.py:4644, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Copy common configuration from helpers (cluster.py:4664, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Generate and write macros file (cluster.py:4716, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_s3_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_s3_cluster/configs/named_collections.xml'] to /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/configs/config.d (cluster.py:4752, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/database (cluster.py:4769, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/logs (cluster.py:4780, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4864, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Setup directory for instance: s0_0_1 (cluster.py:2813, start) 2025-04-01 23:57:42 [ 660 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4639, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Create directory for common tests configuration (cluster.py:4644, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Copy common configuration from helpers (cluster.py:4664, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Generate and write macros file (cluster.py:4716, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_s3_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_s3_cluster/configs/named_collections.xml'] to /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/configs/config.d (cluster.py:4752, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/database (cluster.py:4769, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/logs (cluster.py:4780, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4864, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Setup directory for instance: s0_1_0 (cluster.py:2813, start) 2025-04-01 23:57:42 [ 660 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4639, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Create directory for common tests 
configuration (cluster.py:4644, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Copy common configuration from helpers (cluster.py:4664, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Generate and write macros file (cluster.py:4716, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_s3_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_s3_cluster/configs/named_collections.xml'] to /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/configs/config.d (cluster.py:4752, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/database (cluster.py:4769, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/logs (cluster.py:4780, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4864, create_dir) 2025-04-01 23:57:42 [ 660 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:6712d5cc610d', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/coordination', 'MINIO_CERTS_DIR': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/minio/certs', 'MINIO_DATA_DIR': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/minio/data', 'MINIO_PORT': '9001', 'SSL_CERT_FILE': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/minio/certs/public.crt', 'RESOLVER_LOGS': '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/resolver', 'RESOLVER_LOGS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env (cluster.py:97, _create_env_file) 2025-04-01 23:57:42 [ 660 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-04-01 23:57:42 [ 660 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-04-01 23:57:42 [ 660 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-04-01 23:57:42 [ 660 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-04-01 23:57:42 [ 660 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 
(connectionpool.py:547, _make_request) 2025-04-01 23:57:42 [ 660 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --project-name roottests3cluster --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml pull] (cluster.py:122, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: zoo2 Skipped - Image is already being pulled by zoo1 (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: zoo3 Skipped - Image is already being pulled by zoo1 (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: s0_0_1 Skipped - Image is already being pulled by zoo1 (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: s0_1_0 Skipped - Image is already being pulled by zoo1 (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: s0_0_0 Skipped - Image is already being pulled by zoo1 (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: proxy1 Skipped - Image is already being pulled by proxy2 (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: proxy2 Pulling (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: zoo1 Pulling (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: resolver Pulling (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: minio1 Pulling (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: resolver Pulled (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: minio1 Pulled (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: proxy2 Pulled (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Stderr: zoo1 Pulled (cluster.py:148, run_and_check) 2025-04-01 23:57:53 [ 660 ] DEBUG : Setup ZooKeeper (cluster.py:2854, start) 2025-04-01 23:57:53 [ 660 ] DEBUG : Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/log', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/config', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper1/coordination', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/log', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/config', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper2/coordination', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/log', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/config', '/ClickHouse/tests/integration/test_s3_cluster/_instances-2/keeper3/coordination'] (cluster.py:2855, start) 2025-04-01 23:57:53 [ 660 ] DEBUG : Command:[docker compose --project-name roottests3cluster --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] (cluster.py:122, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr:time="2025-04-01T23:57:53Z" level=trace msg="Docker Desktop integration 
not enabled" (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Network roottests3cluster_default Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Network roottests3cluster_default Created (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Created (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Created (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Created (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Starting (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Starting (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Starting (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Started (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Started (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Started (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr:time="2025-04-01T23:57:54Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Stderr:time="2025-04-01T23:57:54Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check) 2025-04-01 23:57:54 [ 660 ] DEBUG : Wait ZooKeeper to start (cluster.py:2466, wait_zookeeper_to_start) 2025-04-01 23:57:54 [ 660 ] DEBUG : get_instance_ip instance_name=zoo1 (cluster.py:2082, get_instance_ip) 2025-04-01 23:57:54 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-zoo1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:57:54 [ 660 ] DEBUG : get_kazoo_client: zoo1, ip:172.16.2.4, port:2181, use_ssl:False (cluster.py:3341, get_kazoo_client) 2025-04-01 23:57:54 [ 660 ] INFO : Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False (connection.py:650, _connect) 2025-04-01 23:57:54 [ 660 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-01 23:57:54 [ 660 ] INFO : Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False (connection.py:650, _connect) 2025-04-01 23:57:54 [ 660 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-01 23:57:54 [ 660 ] INFO : Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False (connection.py:650, _connect) 2025-04-01 23:57:54 [ 660 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-01 23:57:54 [ 660 ] INFO : Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False (connection.py:650, _connect) 2025-04-01 23:57:54 [ 660 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, 
_connect_attempt) 2025-04-01 23:57:55 [ 660 ] INFO : Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False (connection.py:650, _connect) 2025-04-01 23:57:55 [ 660 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-01 23:57:55 [ 660 ] INFO : Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False (connection.py:650, _connect) 2025-04-01 23:57:55 [ 660 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-01 23:57:57 [ 660 ] INFO : Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False (connection.py:650, _connect) 2025-04-01 23:57:57 [ 660 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-04-01 23:57:57 [ 660 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-04-01 23:57:57 [ 660 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-04-01 23:57:57 [ 660 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-04-01 23:57:57 [ 660 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-04-01 23:57:57 [ 660 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-04-01 23:57:57 [ 660 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-04-01 23:57:57 [ 660 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-04-01 23:57:57 [ 660 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2025-04-01 23:57:57 [ 660 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-04-01 23:57:57 [ 660 ] DEBUG : get_instance_ip instance_name=zoo2 (cluster.py:2082, get_instance_ip) 2025-04-01 23:57:57 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-zoo2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:57:57 [ 660 ] DEBUG : get_kazoo_client: zoo2, ip:172.16.2.2, port:2181, use_ssl:False (cluster.py:3341, get_kazoo_client) 2025-04-01 23:57:57 [ 660 ] INFO : Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False (connection.py:650, _connect) 2025-04-01 23:57:57 [ 660 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-04-01 23:57:57 [ 660 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-04-01 23:57:57 [ 660 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-04-01 23:57:57 [ 660 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-04-01 23:57:57 [ 660 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-04-01 23:57:57 [ 660 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-04-01 23:57:57 [ 660 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-04-01 23:57:57 [ 660 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-04-01 23:57:57 [ 660 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2025-04-01 23:57:57 [ 660 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-04-01 23:57:57 [ 660 ] DEBUG : get_instance_ip instance_name=zoo3 (cluster.py:2082, get_instance_ip) 2025-04-01 23:57:57 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-zoo3-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:57:57 [ 660 ] DEBUG : get_kazoo_client: zoo3, ip:172.16.2.3, port:2181, use_ssl:False (cluster.py:3341, get_kazoo_client) 2025-04-01 23:57:57 [ 660 ] INFO : Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False (connection.py:650, _connect) 2025-04-01 23:57:57 [ 660 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-04-01 23:57:57 [ 660 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-04-01 23:57:57 [ 660 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-04-01 23:57:57 [ 660 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-04-01 23:57:57 [ 660 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-04-01 23:57:57 [ 660 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-04-01 23:57:57 [ 660 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-04-01 23:57:57 [ 660 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-04-01 23:57:57 [ 660 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop) 2025-04-01 23:57:57 [ 660 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-04-01 23:57:57 [ 660 ] DEBUG : All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') (cluster.py:2482, wait_zookeeper_nodes_to_start) 2025-04-01 23:57:57 [ 660 ] INFO : Trying to create Minio instance by command docker compose --project-name roottests3cluster --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d (cluster.py:3132, start) 2025-04-01 23:57:57 [ 660 ] DEBUG : Command:[docker compose --project-name roottests3cluster --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d] (cluster.py:122, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr:time="2025-04-01T23:57:57Z" level=trace msg="Docker Desktop integration not enabled" (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Volume "roottests3cluster_data1-1" Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Volume "roottests3cluster_data1-1" Created (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr:time="2025-04-01T23:57:57Z" level=warning msg="Found orphan containers ([roottests3cluster-zoo2-1 roottests3cluster-zoo1-1 roottests3cluster-zoo3-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up." 
(cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy1-1 Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: proxy1 The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v3) and no specific platform was requested (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: proxy2 The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v3) and no specific platform was requested (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Created (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy1-1 Created (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Created (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Created (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy1-1 Starting (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Starting (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy1-1 Started (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Started (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Starting (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Starting (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Started (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Started (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr:time="2025-04-01T23:57:58Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] DEBUG : Stderr:time="2025-04-01T23:57:58Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check) 2025-04-01 23:57:58 [ 660 ] INFO : Trying to connect to Minio... 
(cluster.py:3138, start) 2025-04-01 23:57:58 [ 660 ] DEBUG : get_instance_ip instance_name=minio1 (cluster.py:2082, get_instance_ip) 2025-04-01 23:57:58 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-minio1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:57:58 [ 660 ] DEBUG : get_instance_ip instance_name=proxy1 (cluster.py:2082, get_instance_ip) 2025-04-01 23:57:58 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-proxy1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:57:58 [ 660 ] DEBUG : Starting new HTTP connection (1): 172.16.2.8:9001 (connectionpool.py:245, _new_conn) 2025-04-01 23:57:58 [ 660 ] DEBUG : Incremented Retry for (url='/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) (retry.py:517, increment) 2025-04-01 23:57:58 [ 660 ] WARNING : Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')': / (connectionpool.py:872, urlopen) 2025-04-01 23:57:58 [ 660 ] DEBUG : Starting new HTTP connection (2): 172.16.2.8:9001 (connectionpool.py:245, _new_conn) 2025-04-01 23:57:58 [ 660 ] DEBUG : Incremented Retry for (url='/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) (retry.py:517, increment) 2025-04-01 23:57:58 [ 660 ] WARNING : Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')': / (connectionpool.py:872, urlopen) 2025-04-01 23:57:58 [ 660 ] DEBUG : Starting new HTTP connection (3): 172.16.2.8:9001 (connectionpool.py:245, _new_conn) 2025-04-01 23:57:58 [ 660 ] DEBUG : Incremented Retry for (url='/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) (retry.py:517, increment) 2025-04-01 23:57:58 [ 660 ] WARNING : Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')': / (connectionpool.py:872, urlopen) 2025-04-01 23:57:58 [ 660 ] DEBUG : Starting new HTTP connection (4): 172.16.2.8:9001 (connectionpool.py:245, _new_conn) 2025-04-01 23:57:58 [ 660 ] DEBUG : Can't connect to Minio: HTTPConnectionPool(host='172.16.2.8', port=9001): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2637, wait_minio_to_start) 2025-04-01 23:57:59 [ 660 ] DEBUG : Starting new HTTP connection (5): 172.16.2.8:9001 (connectionpool.py:245, _new_conn) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://172.16.2.8:9001 "GET / HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:57:59 [ 660 ] DEBUG : Connected to Minio. 
(cluster.py:2617, wait_minio_to_start) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://172.16.2.8:9001 "GET /root?location= HTTP/1.1" 404 0 (connectionpool.py:547, _make_request) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:57:59 [ 660 ] DEBUG : S3 bucket 'root' created (cluster.py:2632, wait_minio_to_start) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://172.16.2.8:9001 "GET /root2?location= HTTP/1.1" 404 0 (connectionpool.py:547, _make_request) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root2 HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:57:59 [ 660 ] DEBUG : S3 bucket 'root2' created (cluster.py:2632, wait_minio_to_start) 2025-04-01 23:57:59 [ 660 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --project-name roottests3cluster --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml up -d --no-recreate') (cluster.py:3200, start) 2025-04-01 23:57:59 [ 660 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --project-name roottests3cluster --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml up -d --no-recreate] (cluster.py:122, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Running (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Running (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy1-1 Running (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Running (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Running (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Running (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Running (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_1-1 Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_0-1 Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_1_0-1 Creating (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_0-1 Created (cluster.py:148, run_and_check) 
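
For reference, the base64 payload piped into s3_mock.py on the resolver container earlier in this log decodes to the small bottle application below: it echoes the MyCustomHeader value back for any bucket/path request and answers OK on '/'. The code is the decoded payload; only the comments are added here for orientation.

import sys

from bottle import request, response, route, run


@route("/<_bucket>/<_path:path>")
def server(_bucket, _path):
    # Echo the MyCustomHeader request header back, or "unknown" if it is absent.
    result = (
        request.headers["MyCustomHeader"]
        if "MyCustomHeader" in request.headers
        else "unknown"
    )
    response.content_type = "text/plain"
    response.set_header("Content-Length", len(result))
    return result


@route("/")
def ping():
    # Health-check endpoint; this is what the repeated curl calls poll until it answers OK.
    response.content_type = "text/plain"
    response.set_header("Content-Length", 2)
    return "OK"


run(host="0.0.0.0", port=int(sys.argv[1]))
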
2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_1_0-1 Created (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_1-1 Created (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_1_0-1 Starting (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_0-1 Starting (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_1-1 Starting (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_1-1 Started (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_1_0-1 Started (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_0-1 Started (cluster.py:148, run_and_check) 2025-04-01 23:57:59 [ 660 ] DEBUG : ClickHouse instance created (cluster.py:3208, start) 2025-04-01 23:57:59 [ 660 ] DEBUG : get_instance_ip instance_name=s0_0_0 (cluster.py:2082, get_instance_ip) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_0-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:57:59 [ 660 ] DEBUG : get_instance_ip instance_name=s0_0_0 (cluster.py:2092, get_instance_global_ipv6) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_0-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:57:59 [ 660 ] DEBUG : Waiting for ClickHouse start in s0_0_0, ip: 172.16.2.9... (cluster.py:3216, start) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_0-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:57:59 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:00 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:00 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:00 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:00 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:00 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None 
(connectionpool.py:547, _make_request) 2025-04-01 23:58:00 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:00 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:00 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:00 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:00 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/b5a0ac001a349f3a3ea9c48772221d1a59068f4b2d7e89b4d6a3c172a2bf3a81/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : ClickHouse s0_0_0 started (cluster.py:3220, start) 2025-04-01 23:58:01 [ 660 ] DEBUG : get_instance_ip instance_name=s0_0_1 (cluster.py:2082, get_instance_ip) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : get_instance_ip instance_name=s0_0_1 (cluster.py:2092, get_instance_global_ipv6) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : Waiting for ClickHouse start in s0_0_1, ip: 172.16.2.11... (cluster.py:3216, start) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_0_1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/588241bfab92c5d0837319b31ffaa6314125365974ef19e724e9b4126c22eb2b/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : ClickHouse s0_0_1 started (cluster.py:3220, start) 2025-04-01 23:58:01 [ 660 ] DEBUG : get_instance_ip instance_name=s0_1_0 (cluster.py:2082, get_instance_ip) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_1_0-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : get_instance_ip instance_name=s0_1_0 (cluster.py:2092, get_instance_global_ipv6) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_1_0-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : Waiting for ClickHouse start in s0_1_0, ip: 172.16.2.10... 
(cluster.py:3216, start) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottests3cluster-s0_1_0-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://localhost:None "GET /v1.46/containers/2a69e323746433f66317dec67bb84f57fc95d059a7382fe645a5ecc26f908570/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : ClickHouse s0_1_0 started (cluster.py:3220, start) 2025-04-01 23:58:01 [ 660 ] INFO : Cluster started (test.py:96, started_cluster) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/clickhouse/part1.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/clickhouse/part123.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/database/part2.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/database/partition675.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_0.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_1.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_2.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_3.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_4.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_5.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_6.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_7.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_8.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_9.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_10.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_11.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_12.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_13.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_14.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_15.csv 
HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_16.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_17.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_18.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_19.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_20.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_21.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_22.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_23.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_24.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_25.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_26.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_27.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_28.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_29.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_30.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_31.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_32.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_33.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_34.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_35.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_36.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_37.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_38.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 
2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_39.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_40.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_41.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_42.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_43.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_44.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_45.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_46.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_47.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_48.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_49.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_50.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_51.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_52.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_53.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_54.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_55.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_56.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_57.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_58.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_59.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_60.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_61.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 
"PUT /root/data/generated/file_62.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_63.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_64.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_65.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_66.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_67.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_68.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_69.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_70.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_71.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_72.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_73.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_74.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_75.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_76.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_77.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_78.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_79.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_80.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_81.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_82.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_83.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_84.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_85.csv HTTP/1.1" 200 0 
(connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_86.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_87.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_88.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_89.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_90.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_91.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_92.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_93.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_94.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_95.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_96.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_97.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_98.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "PUT /root/data/generated/file_99.csv HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] DEBUG : http://172.16.2.8:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix= HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-01 23:58:01 [ 660 ] INFO : Starting mock server s3_mock.py (mock_servers.py:18, start_mock_servers) 2025-04-01 23:58:01 [ 660 ] DEBUG : run container_id:roottests3cluster-resolver-1 detach:False nothrow:False cmd: ['bash', '-c', 'echo aW1wb3J0IHN5cwoKZnJvbSBib3R0bGUgaW1wb3J0IHJlcXVlc3QsIHJlc3BvbnNlLCByb3V0ZSwgcnVuCgoKQHJvdXRlKCIvPF9idWNrZXQ+LzxfcGF0aDpwYXRoPiIpCmRlZiBzZXJ2ZXIoX2J1Y2tldCwgX3BhdGgpOgogICAgcmVzdWx0ID0gKAogICAgICAgIHJlcXVlc3QuaGVhZGVyc1siTXlDdXN0b21IZWFkZXIiXQogICAgICAgIGlmICJNeUN1c3RvbUhlYWRlciIgaW4gcmVxdWVzdC5oZWFkZXJzCiAgICAgICAgZWxzZSAidW5rbm93biIKICAgICkKICAgIHJlc3BvbnNlLmNvbnRlbnRfdHlwZSA9ICJ0ZXh0L3BsYWluIgogICAgcmVzcG9uc2Uuc2V0X2hlYWRlcigiQ29udGVudC1MZW5ndGgiLCBsZW4ocmVzdWx0KSkKICAgIHJldHVybiByZXN1bHQKCgpAcm91dGUoIi8iKQpkZWYgcGluZygpOgogICAgcmVzcG9uc2UuY29udGVudF90eXBlID0gInRleHQvcGxhaW4iCiAgICByZXNwb25zZS5zZXRfaGVhZGVyKCJDb250ZW50LUxlbmd0aCIsIDIpCiAgICByZXR1cm4gIk9LIgoKCnJ1bihob3N0PSIwLjAuMC4wIiwgcG9ydD1pbnQoc3lzLmFyZ3ZbMV0pKQo= | base64 --decode > s3_mock.py'] (cluster.py:2126, exec_in_container) 2025-04-01 23:58:01 [ 660 ] DEBUG : Command:[docker exec roottests3cluster-resolver-1 bash -c echo 
aW1wb3J0IHN5cwoKZnJvbSBib3R0bGUgaW1wb3J0IHJlcXVlc3QsIHJlc3BvbnNlLCByb3V0ZSwgcnVuCgoKQHJvdXRlKCIvPF9idWNrZXQ+LzxfcGF0aDpwYXRoPiIpCmRlZiBzZXJ2ZXIoX2J1Y2tldCwgX3BhdGgpOgogICAgcmVzdWx0ID0gKAogICAgICAgIHJlcXVlc3QuaGVhZGVyc1siTXlDdXN0b21IZWFkZXIiXQogICAgICAgIGlmICJNeUN1c3RvbUhlYWRlciIgaW4gcmVxdWVzdC5oZWFkZXJzCiAgICAgICAgZWxzZSAidW5rbm93biIKICAgICkKICAgIHJlc3BvbnNlLmNvbnRlbnRfdHlwZSA9ICJ0ZXh0L3BsYWluIgogICAgcmVzcG9uc2Uuc2V0X2hlYWRlcigiQ29udGVudC1MZW5ndGgiLCBsZW4ocmVzdWx0KSkKICAgIHJldHVybiByZXN1bHQKCgpAcm91dGUoIi8iKQpkZWYgcGluZygpOgogICAgcmVzcG9uc2UuY29udGVudF90eXBlID0gInRleHQvcGxhaW4iCiAgICByZXNwb25zZS5zZXRfaGVhZGVyKCJDb250ZW50LUxlbmd0aCIsIDIpCiAgICByZXR1cm4gIk9LIgoKCnJ1bihob3N0PSIwLjAuMC4wIiwgcG9ydD1pbnQoc3lzLmFyZ3ZbMV0pKQo= | base64 --decode > s3_mock.py] (cluster.py:122, run_and_check) 2025-04-01 23:58:01 [ 660 ] DEBUG : run container_id:roottests3cluster-resolver-1 detach:True nothrow:False cmd: ['bash', '-c', 'python3 s3_mock.py 8080 >/var/log/resolver/s3_mock.log 2>/var/log/resolver/s3_mock.err.log'] (cluster.py:2126, exec_in_container) 2025-04-01 23:58:01 [ 660 ] DEBUG : Command:[docker exec roottests3cluster-resolver-1 bash -c python3 s3_mock.py 8080 >/var/log/resolver/s3_mock.log 2>/var/log/resolver/s3_mock.err.log] (cluster.py:122, run_and_check) 2025-04-01 23:58:01 [ 660 ] DEBUG : run container_id:roottests3cluster-resolver-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8080/'] (cluster.py:2126, exec_in_container) 2025-04-01 23:58:01 [ 660 ] DEBUG : Command:[docker exec roottests3cluster-resolver-1 curl -s http://localhost:8080/] (cluster.py:122, run_and_check) 2025-04-01 23:58:01 [ 660 ] DEBUG : Exitcode:7 (cluster.py:150, run_and_check) 2025-04-01 23:58:02 [ 660 ] DEBUG : run container_id:roottests3cluster-resolver-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8080/'] (cluster.py:2126, exec_in_container) 2025-04-01 23:58:02 [ 660 ] DEBUG : Command:[docker exec roottests3cluster-resolver-1 curl -s http://localhost:8080/] (cluster.py:122, run_and_check) 2025-04-01 23:58:02 [ 660 ] DEBUG : Stdout:OK (cluster.py:146, run_and_check) 2025-04-01 23:58:02 [ 660 ] DEBUG : s3_mock.py answered OK on attempt 2 (mock_servers.py:67, start_mock_servers) 2025-04-01 23:58:02 [ 660 ] INFO : Mock server s3_mock.py started (mock_servers.py:82, start_mock_servers) ----------------------------- Captured stderr call ----------------------------- Executing query DROP TABLE IF EXISTS insert_select_replicated_local ON CLUSTER 'first_shard' SYNC; on s0_0_0 Executing query CREATE TABLE insert_select_replicated_local ON CLUSTER 'first_shard' (a String, b UInt64) ENGINE=ReplicatedMergeTree('/clickhouse/tables/{shard}/insert_select_with_replicated', '{replica}') ORDER BY (a, b); on s0_0_0 Executing query SYSTEM STOP FETCHES; on s0_0_0 Executing query SYSTEM STOP MERGES; on s0_0_0 Executing query SYSTEM STOP FETCHES; on s0_0_1 Executing query SYSTEM STOP MERGES; on s0_0_1 Executing query INSERT INTO insert_select_replicated_local SELECT * FROM s3Cluster( 'first_shard', 'http://minio1:9001/root/data/generated/*.csv', 'minio', 'minio123', 'CSV','a String, b UInt64' ) SETTINGS parallel_distributed_insert_select=1; on s0_0_0 ------------------------------ Captured log call ------------------------------- 2025-04-01 23:58:02 [ 660 ] DEBUG : Executing query DROP TABLE IF EXISTS insert_select_replicated_local ON CLUSTER 'first_shard' SYNC; on s0_0_0 (cluster.py:3677, query) 2025-04-01 23:58:02 [ 660 ] DEBUG : Executing query CREATE TABLE insert_select_replicated_local ON CLUSTER 
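For readability, the base64 payload echoed into s3_mock.py on the resolver container decodes to the following Bottle application. It exposes a catch-all bucket/path route that echoes the request's MyCustomHeader value (or "unknown") and a "/" ping endpoint that returns "OK", which is the response the curl health check above receives on its second attempt; the listen port comes from sys.argv[1] (8080 here).

import sys

from bottle import request, response, route, run


@route("/<_bucket>/<_path:path>")
def server(_bucket, _path):
    result = (
        request.headers["MyCustomHeader"]
        if "MyCustomHeader" in request.headers
        else "unknown"
    )
    response.content_type = "text/plain"
    response.set_header("Content-Length", len(result))
    return result


@route("/")
def ping():
    response.content_type = "text/plain"
    response.set_header("Content-Length", 2)
    return "OK"


run(host="0.0.0.0", port=int(sys.argv[1]))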
'first_shard' (a String, b UInt64) ENGINE=ReplicatedMergeTree('/clickhouse/tables/{shard}/insert_select_with_replicated', '{replica}') ORDER BY (a, b); on s0_0_0 (cluster.py:3677, query) 2025-04-01 23:58:03 [ 660 ] DEBUG : Executing query SYSTEM STOP FETCHES; on s0_0_0 (cluster.py:3677, query) 2025-04-01 23:58:03 [ 660 ] DEBUG : Executing query SYSTEM STOP MERGES; on s0_0_0 (cluster.py:3677, query) 2025-04-01 23:58:03 [ 660 ] DEBUG : Executing query SYSTEM STOP FETCHES; on s0_0_1 (cluster.py:3677, query) 2025-04-01 23:58:03 [ 660 ] DEBUG : Executing query SYSTEM STOP MERGES; on s0_0_1 (cluster.py:3677, query) 2025-04-01 23:58:03 [ 660 ] DEBUG : Executing query INSERT INTO insert_select_replicated_local SELECT * FROM s3Cluster( 'first_shard', 'http://minio1:9001/root/data/generated/*.csv', 'minio', 'minio123', 'CSV','a String, b UInt64' ) SETTINGS parallel_distributed_insert_select=1; on s0_0_0 (cluster.py:3677, query) _______________________ test_distributed_s3_table_engine _______________________ started_cluster = def test_distributed_s3_table_engine(started_cluster): node = started_cluster.instances["s0_0_0"] > resp_def = node.query( """ SELECT * from s3Cluster( 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) """ ) test_s3_cluster/test.py:727: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 209, stderr: Code: 209. DB::NetException: Timeout: connect timed out: 172.16.2.9:9000 (172.16.2.9:9000, connection timeout 10000 ms). (SOCKET_TIMEOUT), Stack trace (when copying this message, always include the lines below): E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x0000000038031254 E 1. ./build_docker/./src/Common/Exception.cpp:105: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc0ed05 E 2. ./src/Common/Exception.h:105: DB::NetException::NetException(int, FormatStringHelperImpl::type, std::type_identity::type, std::type_identity::type>, String&&, String const&, long&&) @ 0x00000000307fa057 E 3. ./build_docker/./src/Client/Connection.cpp:321: DB::Connection::connect(DB::ConnectionTimeouts const&) @ 0x00000000307df744 E 4. 
./build_docker/./src/Client/Connection.cpp:627: DB::Connection::getServerVersion(DB::ConnectionTimeouts const&, String&, unsigned long&, unsigned long&, unsigned long&, unsigned long&) @ 0x00000000307e6aaa E 5. ./build_docker/./programs/client/Client.cpp:479: DB::Client::connect() @ 0x000000001bff6862 E 6. ./build_docker/./programs/client/Client.cpp:375: DB::Client::main(std::vector> const&) @ 0x000000001bff437f E 7. ./build_docker/./base/poco/Util/src/Application.cpp:315: Poco::Util::Application::run() @ 0x00000000382716b7 E 8. ./build_docker/./programs/client/Client.cpp:1399: mainEntryClickHouseClient(int, char**) @ 0x000000001c010ec9 E 9. ./build_docker/./programs/main.cpp:269: main @ 0x000000000b27d81f E 10. ? @ 0x00007f3df24bbd90 E 11. ? @ 0x00007f3df24bbe40 E 12. _start @ 0x000000000b1a602e helpers/client.py:248: QueryRuntimeException ----------------------------- Captured stderr call ----------------------------- Executing query SELECT * from s3Cluster( 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) on s0_0_0 ------------------------------ Captured log call ------------------------------- 2025-04-01 23:58:14 [ 660 ] DEBUG : Executing query SELECT * from s3Cluster( 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) on s0_0_0 (cluster.py:3677, query) ____________________________ test_hive_partitioning ____________________________ started_cluster = def test_hive_partitioning(started_cluster): node = started_cluster.instances["s0_0_0"] for i in range(1, 5): > exists = node.query( f""" SELECT count() FROM s3('http://minio1:9001/root/data/hive/key={i}/*', 'minio', 'minio123', 'Parquet', 'key Int32, value Int32') GROUP BY ALL FORMAT TSV """ ) test_s3_cluster/test.py:794: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 209, stderr: Code: 209. DB::NetException: Timeout: connect timed out: 172.16.2.9:9000 (172.16.2.9:9000, connection timeout 10000 ms). (SOCKET_TIMEOUT), Stack trace (when copying this message, always include the lines below): E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x0000000038031254 E 1. 
./build_docker/./src/Common/Exception.cpp:105: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc0ed05 E 2. ./src/Common/Exception.h:105: DB::NetException::NetException(int, FormatStringHelperImpl::type, std::type_identity::type, std::type_identity::type>, String&&, String const&, long&&) @ 0x00000000307fa057 E 3. ./build_docker/./src/Client/Connection.cpp:321: DB::Connection::connect(DB::ConnectionTimeouts const&) @ 0x00000000307df744 E 4. ./build_docker/./src/Client/Connection.cpp:627: DB::Connection::getServerVersion(DB::ConnectionTimeouts const&, String&, unsigned long&, unsigned long&, unsigned long&, unsigned long&) @ 0x00000000307e6aaa E 5. ./build_docker/./programs/client/Client.cpp:479: DB::Client::connect() @ 0x000000001bff6862 E 6. ./build_docker/./programs/client/Client.cpp:375: DB::Client::main(std::vector> const&) @ 0x000000001bff437f E 7. ./build_docker/./base/poco/Util/src/Application.cpp:315: Poco::Util::Application::run() @ 0x00000000382716b7 E 8. ./build_docker/./programs/client/Client.cpp:1399: mainEntryClickHouseClient(int, char**) @ 0x000000001c010ec9 E 9. ./build_docker/./programs/main.cpp:269: main @ 0x000000000b27d81f E 10. ? @ 0x00007eff54688d90 E 11. ? @ 0x00007eff54688e40 E 12. _start @ 0x000000000b1a602e helpers/client.py:248: QueryRuntimeException ----------------------------- Captured stderr call ----------------------------- Executing query SELECT count() FROM s3('http://minio1:9001/root/data/hive/key=1/*', 'minio', 'minio123', 'Parquet', 'key Int32, value Int32') GROUP BY ALL FORMAT TSV on s0_0_0 ------------------------------ Captured log call ------------------------------- 2025-04-01 23:58:25 [ 660 ] DEBUG : Executing query SELECT count() FROM s3('http://minio1:9001/root/data/hive/key=1/*', 'minio', 'minio123', 'Parquet', 'key Int32, value Int32') GROUP BY ALL FORMAT TSV on s0_0_0 (cluster.py:3677, query) ________ test_parallel_distributed_insert_select_with_schema_inference _________ started_cluster = def test_parallel_distributed_insert_select_with_schema_inference(started_cluster): node = started_cluster.instances["s0_0_0"] > node.query( """DROP TABLE IF EXISTS parallel_insert_select ON CLUSTER 'first_shard' SYNC;""" ) test_s3_cluster/test.py:436: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 209, stderr: Code: 209. DB::NetException: Timeout: connect timed out: 172.16.2.9:9000 (172.16.2.9:9000, connection timeout 10000 ms). 
(SOCKET_TIMEOUT), Stack trace (when copying this message, always include the lines below): E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x0000000038031254 E 1. ./build_docker/./src/Common/Exception.cpp:105: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc0ed05 E 2. ./src/Common/Exception.h:105: DB::NetException::NetException(int, FormatStringHelperImpl::type, std::type_identity::type, std::type_identity::type>, String&&, String const&, long&&) @ 0x00000000307fa057 E 3. ./build_docker/./src/Client/Connection.cpp:321: DB::Connection::connect(DB::ConnectionTimeouts const&) @ 0x00000000307df744 E 4. ./build_docker/./src/Client/Connection.cpp:627: DB::Connection::getServerVersion(DB::ConnectionTimeouts const&, String&, unsigned long&, unsigned long&, unsigned long&, unsigned long&) @ 0x00000000307e6aaa E 5. ./build_docker/./programs/client/Client.cpp:479: DB::Client::connect() @ 0x000000001bff6862 E 6. ./build_docker/./programs/client/Client.cpp:375: DB::Client::main(std::vector> const&) @ 0x000000001bff437f E 7. ./build_docker/./base/poco/Util/src/Application.cpp:315: Poco::Util::Application::run() @ 0x00000000382716b7 E 8. ./build_docker/./programs/client/Client.cpp:1399: mainEntryClickHouseClient(int, char**) @ 0x000000001c010ec9 E 9. ./build_docker/./programs/main.cpp:269: main @ 0x000000000b27d81f E 10. ? @ 0x00007fc9095e4d90 E 11. ? @ 0x00007fc9095e4e40 E 12. _start @ 0x000000000b1a602e helpers/client.py:248: QueryRuntimeException ----------------------------- Captured stderr call ----------------------------- Executing query DROP TABLE IF EXISTS parallel_insert_select ON CLUSTER 'first_shard' SYNC; on s0_0_0 ------------------------------ Captured log call ------------------------------- 2025-04-01 23:58:35 [ 660 ] DEBUG : Executing query DROP TABLE IF EXISTS parallel_insert_select ON CLUSTER 'first_shard' SYNC; on s0_0_0 (cluster.py:3677, query) ______________________________ test_remote_hedged ______________________________ started_cluster = def test_remote_hedged(started_cluster): node = started_cluster.instances["s0_0_0"] > pure_s3 = node.query( """ SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) LIMIT 1 """ ) test_s3_cluster/test.py:672: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! 
Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 209, stderr: Code: 209. DB::NetException: Timeout: connect timed out: 172.16.2.9:9000 (172.16.2.9:9000, connection timeout 10000 ms). (SOCKET_TIMEOUT), Stack trace (when copying this message, always include the lines below): E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x0000000038031254 E 1. ./build_docker/./src/Common/Exception.cpp:105: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc0ed05 E 2. ./src/Common/Exception.h:105: DB::NetException::NetException(int, FormatStringHelperImpl::type, std::type_identity::type, std::type_identity::type>, String&&, String const&, long&&) @ 0x00000000307fa057 E 3. ./build_docker/./src/Client/Connection.cpp:321: DB::Connection::connect(DB::ConnectionTimeouts const&) @ 0x00000000307df744 E 4. ./build_docker/./src/Client/Connection.cpp:627: DB::Connection::getServerVersion(DB::ConnectionTimeouts const&, String&, unsigned long&, unsigned long&, unsigned long&, unsigned long&) @ 0x00000000307e6aaa E 5. ./build_docker/./programs/client/Client.cpp:479: DB::Client::connect() @ 0x000000001bff6862 E 6. ./build_docker/./programs/client/Client.cpp:375: DB::Client::main(std::vector> const&) @ 0x000000001bff437f E 7. ./build_docker/./base/poco/Util/src/Application.cpp:315: Poco::Util::Application::run() @ 0x00000000382716b7 E 8. ./build_docker/./programs/client/Client.cpp:1399: mainEntryClickHouseClient(int, char**) @ 0x000000001c010ec9 E 9. ./build_docker/./programs/main.cpp:269: main @ 0x000000000b27d81f E 10. ? @ 0x00007f1cfcb1ad90 E 11. ? @ 0x00007f1cfcb1ae40 E 12. 
_start @ 0x000000000b1a602e helpers/client.py:248: QueryRuntimeException ----------------------------- Captured stderr call ----------------------------- Executing query SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) LIMIT 1 on s0_0_0 ------------------------------ Captured log call ------------------------------- 2025-04-01 23:58:46 [ 660 ] DEBUG : Executing query SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) LIMIT 1 on s0_0_0 (cluster.py:3677, query) ____________________________ test_remote_no_hedged _____________________________ started_cluster = def test_remote_no_hedged(started_cluster): node = started_cluster.instances["s0_0_0"] > pure_s3 = node.query( """ SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) LIMIT 1 """ ) test_s3_cluster/test.py:699: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Net Exception: No route to host (172.16.2.9:9000). 
(NETWORK_ERROR) helpers/client.py:248: QueryRuntimeException ----------------------------- Captured stderr call ----------------------------- Executing query SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) LIMIT 1 on s0_0_0 ------------------------------ Captured log call ------------------------------- 2025-04-01 23:58:56 [ 660 ] DEBUG : Executing query SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) LIMIT 1 on s0_0_0 (cluster.py:3677, query) _______________________________ test_select_all ________________________________ started_cluster = def test_select_all(started_cluster): node = started_cluster.instances["s0_0_0"] > pure_s3 = node.query( """ SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)""" ) test_s3_cluster/test.py:110: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Net Exception: No route to host (172.16.2.9:9000). 
(NETWORK_ERROR) helpers/client.py:248: QueryRuntimeException ----------------------------- Captured stderr call ----------------------------- Executing query SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) on s0_0_0 ------------------------------ Captured log call ------------------------------- 2025-04-01 23:59:00 [ 660 ] DEBUG : Executing query SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) on s0_0_0 (cluster.py:3677, query) _________________________ test_skip_unavailable_shards _________________________ started_cluster = def test_skip_unavailable_shards(started_cluster): node = started_cluster.instances["s0_0_0"] > result = node.query( """ SELECT count(*) from s3Cluster( 'cluster_non_existent_port', 'http://minio1:9001/root/data/clickhouse/part1.csv', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') SETTINGS skip_unavailable_shards = 1 """ ) test_s3_cluster/test.py:315: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Net Exception: No route to host (172.16.2.9:9000). 
(NETWORK_ERROR) helpers/client.py:248: QueryRuntimeException ----------------------------- Captured stderr call ----------------------------- Executing query SELECT count(*) from s3Cluster( 'cluster_non_existent_port', 'http://minio1:9001/root/data/clickhouse/part1.csv', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') SETTINGS skip_unavailable_shards = 1 on s0_0_0 ------------------------------ Captured log call ------------------------------- 2025-04-01 23:59:03 [ 660 ] DEBUG : Executing query SELECT count(*) from s3Cluster( 'cluster_non_existent_port', 'http://minio1:9001/root/data/clickhouse/part1.csv', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') SETTINGS skip_unavailable_shards = 1 on s0_0_0 (cluster.py:3677, query) ________________________________ test_union_all ________________________________ started_cluster = def test_union_all(started_cluster): node = started_cluster.instances["s0_0_0"] > pure_s3 = node.query( """ SELECT * FROM ( SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') UNION ALL SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ) ORDER BY (name, value, polygon) """ ) test_s3_cluster/test.py:204: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 210, stderr: Code: 210. DB::NetException: Net Exception: No route to host (172.16.2.9:9000). 
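Every test_s3_cluster case in this run fails before its SQL is parsed: the helper's clickhouse-client cannot reach the s0_0_0 server at 172.16.2.9:9000, first as connect timeouts (Code 209, SOCKET_TIMEOUT) and later as "No route to host" (Code 210, NETWORK_ERROR). A hypothetical triage snippet for telling the two states apart from outside the container (not part of the test suite):

# Hypothetical triage helper: check whether the server's native-protocol port is
# reachable at all, to separate "server slow to accept" (connect timeout, 209)
# from "container or network gone" (no route to host, 210).
import socket


def probe(host: str, port: int, timeout: float = 5.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "reachable"
    except socket.timeout:
        return "connect timed out"           # matches the SOCKET_TIMEOUT failures above
    except OSError as e:
        return f"unreachable: {e.strerror}"  # e.g. "No route to host", as in the later failures


# e.g. probe("172.16.2.9", 9000)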
(NETWORK_ERROR) helpers/client.py:248: QueryRuntimeException ----------------------------- Captured stderr call ----------------------------- Executing query SELECT * FROM ( SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') UNION ALL SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ) ORDER BY (name, value, polygon) on s0_0_0 ------------------------------ Captured log call ------------------------------- 2025-04-01 23:59:06 [ 660 ] DEBUG : Executing query SELECT * FROM ( SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') UNION ALL SELECT * from s3( 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ) ORDER BY (name, value, polygon) on s0_0_0 (cluster.py:3677, query) --------------------------- Captured stderr teardown --------------------------- Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --project-name roottests3cluster --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml stop --timeout 20] Stderr: Container roottests3cluster-resolver-1 Stopping Stderr: Container roottests3cluster-s0_0_0-1 Stopping Stderr: Container roottests3cluster-s0_1_0-1 Stopping Stderr: Container roottests3cluster-s0_0_1-1 Stopping Stderr: Container roottests3cluster-s0_0_0-1 Stopped Stderr: Container roottests3cluster-minio1-1 Stopping Stderr: Container roottests3cluster-minio1-1 Stopped Stderr: Container roottests3cluster-s0_0_1-1 Stopped Stderr: Container roottests3cluster-s0_1_0-1 Stopped Stderr: Container roottests3cluster-zoo2-1 Stopping Stderr: Container roottests3cluster-zoo3-1 Stopping Stderr: Container roottests3cluster-zoo1-1 Stopping Stderr: Container roottests3cluster-zoo3-1 Stopped Stderr: Container roottests3cluster-zoo2-1 Stopped Stderr: Container roottests3cluster-zoo1-1 Stopped Stderr: Container roottests3cluster-resolver-1 Stopped Stderr: Container roottests3cluster-proxy1-1 Stopping Stderr: Container roottests3cluster-proxy2-1 Stopping Stderr: Container roottests3cluster-proxy1-1 Stopped Stderr: Container roottests3cluster-proxy2-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f 
/ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --project-name roottests3cluster --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml down --volumes] Stderr: Container roottests3cluster-resolver-1 Stopping Stderr: Container roottests3cluster-s0_0_0-1 Stopping Stderr: Container roottests3cluster-s0_0_1-1 Stopping Stderr: Container roottests3cluster-s0_1_0-1 Stopping Stderr: Container roottests3cluster-resolver-1 Stopped Stderr: Container roottests3cluster-resolver-1 Removing Stderr: Container roottests3cluster-s0_1_0-1 Stopped Stderr: Container roottests3cluster-s0_1_0-1 Removing Stderr: Container roottests3cluster-s0_0_1-1 Stopped Stderr: Container roottests3cluster-s0_0_1-1 Removing Stderr: Container roottests3cluster-s0_0_0-1 Stopped Stderr: Container roottests3cluster-s0_0_0-1 Removing Stderr: Container roottests3cluster-s0_0_0-1 Removed Stderr: Container roottests3cluster-minio1-1 Stopping Stderr: Container roottests3cluster-s0_1_0-1 Removed Stderr: Container roottests3cluster-s0_0_1-1 Removed Stderr: Container roottests3cluster-minio1-1 Stopped Stderr: Container roottests3cluster-minio1-1 Removing Stderr: Container roottests3cluster-zoo2-1 Stopping Stderr: Container roottests3cluster-zoo1-1 Stopping Stderr: Container roottests3cluster-zoo3-1 Stopping Stderr: Container roottests3cluster-resolver-1 Removed Stderr: Container roottests3cluster-zoo1-1 Stopped Stderr: Container roottests3cluster-zoo1-1 Removing Stderr: Container roottests3cluster-zoo2-1 Stopped Stderr: Container roottests3cluster-zoo2-1 Removing Stderr: Container roottests3cluster-zoo3-1 Stopped Stderr: Container roottests3cluster-zoo3-1 Removing Stderr: Container roottests3cluster-zoo1-1 Removed Stderr: Container roottests3cluster-zoo2-1 Removed Stderr: Container roottests3cluster-zoo3-1 Removed Stderr: Container roottests3cluster-minio1-1 Removed Stderr: Container roottests3cluster-proxy1-1 Stopping Stderr: Container roottests3cluster-proxy2-1 Stopping Stderr: Container roottests3cluster-proxy1-1 Stopped Stderr: Container roottests3cluster-proxy1-1 Removing Stderr: Container roottests3cluster-proxy2-1 Stopped Stderr: Container roottests3cluster-proxy2-1 Removing Stderr: Container roottests3cluster-proxy1-1 Removed Stderr: Container roottests3cluster-proxy2-1 Removed Stderr: Volume roottests3cluster_data1-1 Removing Stderr: Network roottests3cluster_default Removing Stderr: Volume roottests3cluster_data1-1 Removed Stderr: Network roottests3cluster_default Removed Cleanup called Docker networks for project roottests3cluster are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottests3cluster are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottests3cluster are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottests3cluster-.*-1$' 
--format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottests3cluster Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 Command:[docker compose --env-file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/.env --project-name roottestrefreshablematviewreplicated --file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_2/docker-compose.yml stop --timeout 20] Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Stopping Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Stopping Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Stopped Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Stopped Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Stopping Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Stopping Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Stopping Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Stopped Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Stopped Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/.env --project-name roottestrefreshablematviewreplicated --file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_2/docker-compose.yml down --volumes] Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Stopping Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Stopping Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Stopped Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Removing Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Stopped Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Removing Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Removed Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Removed Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Stopping Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Stopping Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Stopping Stderr: 
Container roottestrefreshablematviewreplicated-zoo3-1 Stopped Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Removing Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Stopped Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Removing Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Stopped Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Removing Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Removed Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Removed Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Removed Stderr: Network roottestrefreshablematviewreplicated_default Removing Stderr: Network roottestrefreshablematviewreplicated_default Removed Cleanup called Docker networks for project roottestrefreshablematviewreplicated are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrefreshablematviewreplicated are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrefreshablematviewreplicated are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrefreshablematviewreplicated-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestrefreshablematviewreplicated Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 ---------------------------- Captured log teardown ----------------------------- 2025-04-01 23:59:09 [ 660 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --project-name roottests3cluster --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml stop --timeout 20] (cluster.py:122, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_0-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_1_0-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_0-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_1_0-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Stopping (cluster.py:148, 
run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:122, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:122, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:122, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/.env --project-name roottests3cluster --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-2/s0_1_0/docker-compose.yml down --volumes] (cluster.py:122, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_0-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_1_0-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Stopped (cluster.py:148, run_and_check) 
2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_1_0-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_1_0-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_1-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_0-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_0-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_0-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_1_0-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-s0_0_1-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-resolver-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo1-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo2-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-zoo3-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-minio1-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: 
Container roottests3cluster-proxy1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy1-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy1-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Container roottests3cluster-proxy2-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Volume roottests3cluster_data1-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Network roottests3cluster_default Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Volume roottests3cluster_data1-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stderr: Network roottests3cluster_default Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Cleanup called (cluster.py:894, cleanup) 2025-04-01 23:59:30 [ 660 ] DEBUG : Docker networks for project roottests3cluster are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces) 2025-04-01 23:59:30 [ 660 ] DEBUG : Docker containers for project roottests3cluster are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces) 2025-04-01 23:59:30 [ 660 ] DEBUG : Docker volumes for project roottests3cluster are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces) 2025-04-01 23:59:30 [ 660 ] DEBUG : Command:[docker container list --all --filter name='^/roottests3cluster-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:122, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Unstopped containers: {} (cluster.py:908, cleanup) 2025-04-01 23:59:30 [ 660 ] DEBUG : No running containers for project: roottests3cluster (cluster.py:922, cleanup) 2025-04-01 23:59:30 [ 660 ] DEBUG : Trying to prune unused networks... (cluster.py:928, cleanup) 2025-04-01 23:59:30 [ 660 ] DEBUG : Trying to prune unused images... (cluster.py:944, cleanup) 2025-04-01 23:59:30 [ 660 ] DEBUG : Command:[docker image prune -f] (cluster.py:122, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:146, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Images pruned (cluster.py:947, cleanup) 2025-04-01 23:59:30 [ 660 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:953, cleanup) 2025-04-01 23:59:30 [ 660 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:122, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check) 2025-04-01 23:59:30 [ 660 ] DEBUG : Volumes pruned: 1 (cluster.py:958, cleanup) 2025-04-01 23:59:30 [ 660 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/.env --project-name roottestrefreshablematviewreplicated --file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_2/docker-compose.yml stop --timeout 20] (cluster.py:122, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:122, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:122, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/.env --project-name roottestrefreshablematviewreplicated --file /ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file 
/ClickHouse/tests/integration/test_refreshable_mat_view_replicated/_instances-2/node1_2/docker-compose.yml down --volumes] (cluster.py:122, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_1-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-node1_2-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Stopping (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Stopped (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo3-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo1-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Container roottestrefreshablematviewreplicated-zoo2-1 Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Network roottestrefreshablematviewreplicated_default Removing (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Stderr: Network roottestrefreshablematviewreplicated_default Removed (cluster.py:148, run_and_check) 2025-04-01 23:59:32 [ 660 ] DEBUG : Cleanup called (cluster.py:894, cleanup) 2025-04-01 23:59:32 [ 660 ] DEBUG : Docker networks for project roottestrefreshablematviewreplicated are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces) 2025-04-01 23:59:32 [ 660 ] DEBUG : Docker containers for project 
roottestrefreshablematviewreplicated are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces)
2025-04-01 23:59:32 [ 660 ] DEBUG : Docker volumes for project roottestrefreshablematviewreplicated are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces)
2025-04-01 23:59:32 [ 660 ] DEBUG : Command:[docker container list --all --filter name='^/roottestrefreshablematviewreplicated-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:122, run_and_check)
2025-04-01 23:59:32 [ 660 ] DEBUG : Unstopped containers: {} (cluster.py:908, cleanup)
2025-04-01 23:59:32 [ 660 ] DEBUG : No running containers for project: roottestrefreshablematviewreplicated (cluster.py:922, cleanup)
2025-04-01 23:59:32 [ 660 ] DEBUG : Trying to prune unused networks... (cluster.py:928, cleanup)
2025-04-01 23:59:32 [ 660 ] DEBUG : Trying to prune unused images... (cluster.py:944, cleanup)
2025-04-01 23:59:32 [ 660 ] DEBUG : Command:[docker image prune -f] (cluster.py:122, run_and_check)
2025-04-01 23:59:32 [ 660 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:146, run_and_check)
2025-04-01 23:59:32 [ 660 ] DEBUG : Images pruned (cluster.py:947, cleanup)
2025-04-01 23:59:32 [ 660 ] DEBUG : Trying to prune unused volumes... (cluster.py:953, cleanup)
2025-04-01 23:59:32 [ 660 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:122, run_and_check)
2025-04-01 23:59:32 [ 660 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check)
2025-04-01 23:59:32 [ 660 ] DEBUG : Volumes pruned: 1 (cluster.py:958, cleanup)
============================== slowest durations ===============================
25.90s setup    test_refreshable_mat_view_replicated/test.py::test_long_query_cancel
23.52s teardown test_s3_cluster/test.py::test_union_all
20.21s setup    test_s3_cluster/test.py::test_distributed_insert_select_with_replicated
12.04s call     test_s3_cluster/test.py::test_distributed_insert_select_with_replicated
10.40s call     test_s3_cluster/test.py::test_hive_partitioning
10.39s call     test_s3_cluster/test.py::test_parallel_distributed_insert_select_with_schema_inference
10.39s call     test_s3_cluster/test.py::test_remote_hedged
10.39s call     test_s3_cluster/test.py::test_distributed_s3_table_engine
3.22s call     test_s3_cluster/test.py::test_remote_no_hedged
3.02s call     test_s3_cluster/test.py::test_select_all
2.97s call     test_s3_cluster/test.py::test_union_all
2.92s call     test_s3_cluster/test.py::test_skip_unavailable_shards
1.96s setup    test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-True]
1.81s setup    test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-False]
1.76s setup    test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-True]
1.76s setup    test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-False]
1.71s setup    test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-False]
1.71s setup    test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-True]
1.71s setup    test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-True]
1.71s setup    test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-False]
1.01s setup    test_refreshable_mat_view_replicated/test.py::test_query_fail
0.91s setup    test_refreshable_mat_view_replicated/test.py::test_query_retry
0.22s call     test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-False]
0.22s call     test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-True]
0.22s call     test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-True]
0.22s call     test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-False]
0.22s call     test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-True]
0.22s call     test_refreshable_mat_view_replicated/test.py::test_query_retry
0.22s call     test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-False]
0.22s call     test_refreshable_mat_view_replicated/test.py::test_long_query_cancel
0.22s call     test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-True]
0.22s call     test_refreshable_mat_view_replicated/test.py::test_query_fail
0.22s call     test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-False]
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-False]
0.00s teardown test_s3_cluster/test.py::test_distributed_insert_select_with_replicated
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-False]
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-True]
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-False]
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-True]
0.00s teardown test_s3_cluster/test.py::test_remote_no_hedged
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_long_query_cancel
0.00s setup    test_s3_cluster/test.py::test_remote_no_hedged
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-False]
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-True]
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-True]
0.00s teardown test_s3_cluster/test.py::test_select_all
0.00s teardown test_s3_cluster/test.py::test_remote_hedged
0.00s setup    test_s3_cluster/test.py::test_distributed_s3_table_engine
0.00s teardown test_s3_cluster/test.py::test_parallel_distributed_insert_select_with_schema_inference
0.00s teardown test_s3_cluster/test.py::test_skip_unavailable_shards
0.00s teardown test_s3_cluster/test.py::test_hive_partitioning
0.00s setup    test_s3_cluster/test.py::test_select_all
0.00s setup    test_s3_cluster/test.py::test_skip_unavailable_shards
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_query_retry
0.00s teardown test_refreshable_mat_view_replicated/test.py::test_query_fail
0.00s teardown test_s3_cluster/test.py::test_distributed_s3_table_engine
0.00s setup    test_s3_cluster/test.py::test_union_all
0.00s setup    test_s3_cluster/test.py::test_parallel_distributed_insert_select_with_schema_inference
0.00s setup    test_s3_cluster/test.py::test_hive_partitioning
0.00s setup    test_s3_cluster/test.py::test_remote_hedged
=========================== short test summary info ============================
FAILED test_s3_cluster/test.py::test_distributed_insert_select_with_replicated
FAILED test_s3_cluster/test.py::test_distributed_s3_table_engine - helpers.cl...
FAILED test_s3_cluster/test.py::test_hive_partitioning - helpers.client.Query...
FAILED test_s3_cluster/test.py::test_parallel_distributed_insert_select_with_schema_inference
FAILED test_s3_cluster/test.py::test_remote_hedged - helpers.client.QueryRunt...
FAILED test_s3_cluster/test.py::test_remote_no_hedged - helpers.client.QueryR...
FAILED test_s3_cluster/test.py::test_select_all - helpers.client.QueryRuntime...
FAILED test_s3_cluster/test.py::test_skip_unavailable_shards - helpers.client...
FAILED test_s3_cluster/test.py::test_union_all - helpers.client.QueryRuntimeE...
SKIPPED [1] test_refreshable_mat_view_replicated/test.py:516: Disabled for sanitizers
SKIPPED [1] test_refreshable_mat_view_replicated/test.py:569: Disabled for sanitizers
SKIPPED [1] test_refreshable_mat_view_replicated/test.py:600: Disabled for sanitizers
SKIPPED [8] test_refreshable_mat_view_replicated/test.py:374: Disabled for sanitizers
================== 9 failed, 11 skipped in 154.84s (0:02:34) ===================
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 528, in
    subprocess.check_call(cmd, shell=True)
  File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_53ric3 --privileged --dns-search='.' --memory=30709035008 --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=6712d5cc610d -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS=" -rfEps --run-id=2 --color=no --durations=0 test_refreshable_mat_view_replicated/test.py::test_long_query_cancel test_refreshable_mat_view_replicated/test.py::test_query_fail test_refreshable_mat_view_replicated/test.py::test_query_retry 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-False-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause0-True-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-False-True]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-False]' 'test_refreshable_mat_view_replicated/test.py::test_real_wait_refresh[to_clause1-True-True]' test_s3_cluster/test.py::test_distributed_insert_select_with_replicated test_s3_cluster/test.py::test_distributed_s3_table_engine test_s3_cluster/test.py::test_hive_partitioning test_s3_cluster/test.py::test_parallel_distributed_insert_select_with_schema_inference test_s3_cluster/test.py::test_remote_hedged test_s3_cluster/test.py::test_remote_no_hedged test_s3_cluster/test.py::test_select_all test_s3_cluster/test.py::test_skip_unavailable_shards test_s3_cluster/test.py::test_union_all -vvv" altinityinfra/integration-tests-runner:cd6390247eca ' returned non-zero exit status 1.
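
Two short illustrative sketches follow, written against what the log above shows; neither is part of the test suite or the runner.

First, the test_s3_cluster failures are all the same NETWORK_ERROR: get_answer() (shown in the test_union_all traceback) turns any non-zero clickhouse-client return code into helpers.client.QueryRuntimeException, so one transient "No route to host" (code 210) from an unreachable replica fails the test outright. A minimal sketch of retrying only that kind of flake at the test level, assuming the helpers.client import path from the traceback; query_with_network_retry is a hypothetical helper, not an existing one:

    import time

    from helpers.client import QueryRuntimeException  # import path as shown in the traceback above


    def query_with_network_retry(node, sql, attempts=3, delay=2.0):
        # Retry node.query() only for the transient network failure seen above
        # (return code 210, "No route to host"); re-raise anything else immediately.
        for attempt in range(attempts):
            try:
                return node.query(sql)
            except QueryRuntimeException as e:
                transient = "No route to host" in str(e) or "NETWORK_ERROR" in str(e)
                if not transient or attempt == attempts - 1:
                    raise
                time.sleep(delay)

Second, the trailing traceback is the runner surfacing pytest's non-zero exit status from inside the container: subprocess.check_call(cmd, shell=True) raises CalledProcessError whenever the docker run command exits non-zero (here, 9 failed tests). A minimal, self-contained sketch of that propagation pattern; the command string is a placeholder, not the real invocation:

    import subprocess
    import sys

    # Placeholder command; the real runner builds the long `docker run ...` line shown above.
    cmd = "docker run --rm example/integration-tests-runner:latest pytest -vvv"

    try:
        subprocess.check_call(cmd, shell=True)
    except subprocess.CalledProcessError as e:
        # Same failure mode as the traceback above: the container's exit status becomes the job's.
        sys.exit(e.returncode)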